Article

An Effective Data-Driven Method for 3-D Building Roof Reconstruction and Robust Change Detection

1 School of Information and Communication Technology, Griffith University, Nathan, QLD 4111, Australia
2 Faculty of Information Technology, Monash University, Clayton, VIC 3800, Australia
* Author to whom correspondence should be addressed.
Authors contributed equally to this work.
Remote Sens. 2018, 10(10), 1512; https://doi.org/10.3390/rs10101512
Submission received: 13 June 2018 / Revised: 10 September 2018 / Accepted: 19 September 2018 / Published: 21 September 2018
(This article belongs to the Special Issue Remote Sensing based Building Extraction)

Abstract

Three-dimensional (3-D) reconstruction of building roofs can be an essential prerequisite for 3-D building change detection, which is important for the detection of informal buildings or extensions and for updating a 3-D map database. However, automatic 3-D roof reconstruction from remote sensing data is still in its development stage for a number of reasons. For instance, it is difficult to determine the neighbourhood relationships among the planes on a complex building roof, locating step edges from point cloud data often requires additional information or imposes constraints, and handling missing roof planes demands human interaction and often produces high reconstruction errors. This research introduces a new 3-D roof reconstruction technique that constructs an adjacency matrix to define the topological relationships among the roof planes. It identifies any missing planes through an analysis of the 3-D plane intersection lines between adjacent planes. Then, it generates new planes to fill the gaps left by the missing planes. Finally, it obtains complete building models through the insertion of approximate wall planes and the building floor. The research reported in this paper then uses the generated building models to detect 3-D changes in buildings. Plane connections are first defined to establish relationships between neighbouring planes. Then, each building in the reference and test model sets is represented using a graph data structure. Finally, the height intensity images, and if required the graph representations, of the reference and test models are directly compared to find and categorise 3-D changes into five groups: new, unchanged, demolished, modified and partially-modified planes. Experimental results on two Australian datasets show high object- and pixel-based accuracy in terms of completeness, correctness, and quality for both the 3-D roof reconstruction and change detection techniques. The proposed change detection technique is robust to various changes, including the addition or removal of a veranda and increases in building height.

Graphical Abstract

1. Introduction

The fundamental task of building reconstruction is the transformation of low-level building primitives (e.g., lines and planes) to a high-level model description. In 3-D change detection, buildings are inspected for changes over a period of time in terms of new, modified, and/or demolished buildings and/or building parts. The reconstruction step can be considered an essential prerequisite for 3-D change detection, particularly for the detection of informal buildings or extensions and for updating a 3-D map database. A 3-D map database contains buildings along with other important man-made objects, such as roads and electric power lines. A direct comparison of 3-D building models generated from a recent dataset to the models in the (old) map database will not only identify the changes in buildings but also support an effective and efficient update of the database. In practice, only a small number of buildings in an area change over a given period of time, unless it is a newly built-up area or an area hit by a calamity (e.g., bushfire or earthquake). Therefore, automatic modelling and change detection steps will be helpful in indicating the potentially changed areas of buildings in a user interface, where a human operator can quickly accept and/or reject the indications and update the database accordingly [1]. Moreover, state and local government officials can check whether the indicated changes were previously authorised. Thus, they can send inspectors to the unauthorised (informal) areas only, instead of all areas, saving both money and time.
Many scientists have developed building reconstruction and change detection techniques utilising image information only, others have utilised LiDAR (Light Detection And Ranging) point clouds, and some have attempted to integrate LiDAR and aerial images for several Geographic Information System (GIS) applications, including city mapping, map database updating, disaster estimation, and city infrastructure planning. However, among spatial data researchers and mapping professionals, LiDAR data have gained popularity because of fast pulsation, precision, and accuracy in capturing 3-D geo-referenced spatial information about buildings, roads, vegetation, and other objects expediently at a high point density. These characteristics make LiDAR data suitable for examining natural and built environments across a wide range of scales for automatic reconstruction and change detection in buildings and their distinct features. Recent improvements in the automation of building reconstruction and change detection methods are reducing the labour and time consumed in these applications.
The 3-D reconstruction of building models and change detection involve several non-trivial processes, such as segmentation, classification, structuring, hypothesis generation, and geometric modelling. A seamless integration of these in a conventional way would be not only unrealistic but also laborious. This paper, therefore, presents a workflow that uses building roof planes for 3-D model generation and subsequently uses these models for change detection. To achieve our goals and address the particular challenges, this research presents two techniques aiming at 3-D reconstruction of building roof models and building change detection separately. The proposed techniques are entirely data-driven, using LiDAR point cloud data only. The first technique, 3-D building roof modelling, reconstructs buildings represented at lower levels with coarse boundaries (3-D roof planes) to higher levels (3-D building models). The second technique, building change detection, subsequently uses the constructed 3-D building models and LiDAR data for the identification of changes in buildings.
In Section 2, the related works are discussed. Section 3 provides the detail on challenges for 3-D building modelling and change detection methods, along with the contributions of this paper. The proposed 3-D building modelling and 3-D change detection methods are presented in Section 4 and Section 5, respectively. In Section 6, the dataset, parameter settings, experimental setup, and results are discussed. Finally, conclusions are presented in Section 7.

2. Related Works

In recent years, studies on building extraction, reconstruction and change detection have made significant advances and a wide range of methods have been proposed on façade segmentation and opening area detection [2], building extraction [3,4,5], change detection and map database update [1,6,7,8], roof plane extraction [9] and 3-D reconstruction [10]. A number of techniques [11,12] have also been proposed for evaluation of these methods. In addition, since different methods were evaluated using different datasets and evaluation techniques, there have been several attempts to benchmark them on common platforms [13,14].
In 3-D building roof reconstruction and change detection, most early methods were manual, involving a trained human operator who performed accurate measurements. However, human intervention is not only expensive but also reduces the speed of execution when high productivity and the processing of large datasets are required. Recently developed reconstruction and change detection methods [1,15] aim to reduce these limitations in a semiautomatic manner.

2.1. 3-D Building Roof Modelling

The 3-D building reconstruction methods can broadly be classified into three categories on the basis of their processing strategies: model-driven, data-driven, and hybrid approaches [16,17,18]. A model-driven method fits a set of predefined building models (shapes) to the input data for extraction, in contrast to a data-driven method, which uses the input data to extract one or more features (e.g., lines and planes) for the detection and reconstruction of buildings. A hybrid method, on the other hand, exhibits the characteristics of both model- and data-driven approaches.
Among the model-driven methods, Oude Elberink and Vosselman [19] proposed a graph matching approach to handle both complete and incomplete laser data. While a complete matching of data with a target model allows an automated 3-D reconstruction of a building roof, an incomplete match leads to a manual interaction for a correct model. To reduce human interaction, Xiong et al. [20] proposed a graph-based error correction for roof topology. A graph edit dictionary that stores representative erroneous subgraphs was used to automatically identify and correct errors.
Kim and Shan [10] proposed a novel data-driven roof plane segmentation and building reconstruction technique using airborne LiDAR data. Although this technique shows good results, it suffers from over-segmentation and neglects the effect of vegetation in the segmentation process. Jung et al. [21] presented a reconstruction technique to develop 3-D rooftop models at city-scale from LiDAR data. Although the experimental results showed a good performance for large buildings, some small roof planes were not detected, and were therefore not reconstructed. Moreover, this method suffers from under-segmentation issues since many roof planes were merged into their adjacent clusters. Wu et al. [22] offered a graph-based technique to reconstruct urban building models from airborne LiDAR data. Building models were reconstructed by gluing all individual parts of the building models obtained from bipartite graph matching into a complete model. Although this technique provides detailed models, it fails to capture the sides of buildings and produces high geometric distortion resulting in low completeness and high modelling errors.
Rottensteiner et al. [14] reported comparative research results for urban object detection and 3-D building reconstruction using the ISPRS (International Society for Photogrammetry and Remote Sensing) benchmark datasets. They selected fourteen different building reconstruction methods and evaluated their performances on the basis of different quality metrics.

2.2. 3-D Building Change Detection

Based on the input data sources, the building change detection methods can be categorised into three groups: image only, LiDAR only, or integration of LiDAR- and image-based methods. The image-based methods are mainly for 2-D change detection and are mostly unable to differentiate partially-modified buildings and planes. For example, both Gu et al. [8] and Leichtle et al. [7] divided the image region into changed and unchanged buildings only and were unable to identify the changes in individual roof planes.
Raw LiDAR data or a LiDAR-derived Digital Surface Model (DSM) is also used as the only source of information to detect 3-D building changes [6,23]. Tran et al. [6] proposed a method where, in addition to the ground and trees, they classified buildings into new, demolished and unchanged types. Teo and Shih [23] extracted changed building regions by applying a height threshold and a morphological filter to the nDSM (normalised DSM) of two different dates. However, this method missed many small planes because of an inadequate choice of the filter window size. In addition, this method was unable to detect partially-modified building planes.
There are also methods that use a segmentation technique for building change detection from both LiDAR and images [1,24]. For instance, Awrangjeb [1] generated a building mask by extracting the building boundaries from non-ground LiDAR data. Later, the building boundaries were refined manually using visual analysis of the aerial image. This method could detect all possible changes in a building such as new, demolished, unchanged, modified, and partially-modified buildings.
In addition, there are methods that detect the changes in a building using 3-D building models. These methods compare the height information of individual buildings and detect the height changes in the buildings for change detection. A 3-D building model can be generated from LiDAR data using a thresholding method [25] or from stereo-images using a least squares matching process [26,27]. For example, Chen et al. [25] used LiDAR-based building models at two different dates to detect building changes. The non-building portion of the models was removed by using a height threshold and NDVI (Normalized Difference Vegetation Index) analysis. However, small changes in buildings and occluded buildings were also removed from the models and were not detected. Consequently, this method achieved at most 96% pixel-based accuracy. The methods of Stal et al. [26] and Qin [27] compare stereo-image- and LiDAR-based building models to detect building changes. In the method of Stal et al. [26], a morphological filter and NDVI analysis were applied to remove unwanted regions, while Qin [27] used three additional filters: a shadow index, a noise filter, and an irregular structure filter. However, the methods that use 3-D building models need several pre-defined parameters for accurate detection of building changes. In addition, modified and partially-modified building planes were not detected by these methods.

3. Challenges and Contributions

The existing 3-D building modelling techniques differ significantly, based on the primitive shapes used and the input data sources [28]. They often impose constraints on the minimum footprint size and positional accuracy values for the reconstruction of specific models at a certain level of detail [29]. Although many of the existing approaches have demonstrated promising results in building reconstruction, there are still a number of issues to be improved. For instance, building roofs are mostly disconnected after the segmentation process, making it difficult to determine the neighbourhood relationships among the roof planes. Furthermore, locating step edges from LiDAR data alone is hard and often requires additional information or constraints. In addition, the approximation of roof patches, which are generally missed because of the low density of the LiDAR data, requires the operator to make assumptions and often produces high reconstruction errors.
To resolve the above issues, this research introduces a data-driven 3-D reconstruction technique that constructs buildings represented at lower levels with coarse boundaries (3-D roof planes) to higher levels (3-D building models). The proposed reconstruction technique receives building roof planes (extracted from LiDAR point cloud data by any roof plane extraction method, such as Awrangjeb and Fraser [9] and Mongus et al. [4]) as inputs and offers the following contributions:
  • Insertion of missing planes: A missing plane can be a small plane from where the number of reflected laser points is limited, possibly due to a low point density. It can also be due to a height jump between planes. A slanted plane is grown if there is an existence of unsegmented LiDAR points between any two planes. Otherwise, a vertical plane is inserted between the planes.
  • Reconstruction of complete building models: When there are missing planes, the topological relationship among the roof planes is incorrect. Thus, an adjacency matrix that defines the topological relationships among the input roof planes of a building is first constructed. Then, the matrix is updated (i.e., the topological relationship is corrected) based on the inserted missing planes. Finally, the building model is generated using the correct topological relationship and the revised intersection lines among the inserted missing planes and the input roof planes.
Generally, the building change detection step is applied after the building detection or 3-D building modelling stage to update a map database [1]. The update to the map database is largely dependent on visual interpretation, estimation of numerous parameters, and human interaction. Consequently, analysing the changes is a time-consuming procedure. In addition, most of the existing methods mainly focus on detecting 2-D changes in a building and are thus unable to distinguish the specific height change in individual planes. Moreover, the majority of the existing 3-D change detection methods, which are based on height differences in the DSM, classify the detected changes into three groups: new, unchanged and demolished planes [23,25,30]. Consequently, these methods are unable to distinguish partially-modified planes from unchanged and new planes, or modified planes from new planes. Therefore, the contribution of the proposed building change detection technique can be summarised as follows:
• Change detection: Automatic detection of 3-D (2-D space and height) changes in buildings and their planes into five groups: new, unchanged, demolished, modified, and partially-modified planes. Unlike the existing methods, the proposed method is capable of detecting changes on a per-plane basis. A newly proposed graphical representation of 3-D building models helps in identification of modified and partially-modified planes.
Since in reality only a small number of buildings are changed in an area, the proposed change detection technique first uses height difference values between the reference and test models to identify new, completely demolished and unchanged buildings. The planes of these buildings are denoted as new, demolished and unchanged, respectively. Then, only the modified building regions are compared using a graph-based representation of the reference and test models to obtain other changes such as modified and partially-modified planes.

4. Proposed 3-D Building Modelling Technique

Figure 1 shows the workflow of the proposed 3-D building modelling technique. Awrangjeb and Fraser’s [9] technique is used for the extraction of building roof planes using LiDAR data and the corresponding Digital Terrain Model (DTM) as the input. The used technique offers high detection performance but has low accuracy in extracting small roof planes and tiny structures since it uses only the LiDAR data. This section presents the different steps in Figure 1 in detail for generation of 3-D building models.

4.1. Adjacency of Roof Planes and Their Intersection Lines

The primary elements for the generation of building models are the roof planes, which are input to the proposed technique. A neighbourhood relation matrix, essentially an adjacency matrix that stores the records of the neighbours of each roof plane, is established to determine the topological relations among these 3-D roof primitives.
Let S_p = {P_1, P_2, …, P_n} be a set of n input roof planes, and let an adjacency matrix M of size n × n be instantiated. The roof planes that remain within the Euclidean distance dist_p of a source plane P_i are considered its neighbours, and the corresponding rows and columns of M are updated accordingly with the roof plane's ID. The value of dist_p is chosen as twice the maximum distance (d_max) of a point to its nearest point in the input LiDAR point cloud [9]. To speed up the generation of M, the planes within an appropriate rectangular region (e.g., a bit larger than the bounding box) around the source plane P_i are first determined, rather than computing the distances of P_i to all the other input roof planes. Next, for the planes which lie within dist_p of the boundary points of P_i, their records in the ith row and column of M are updated. The procedure continues iteratively until all the input roof planes are processed and the interrelations among them are established.
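The neighbourhood test described above can be sketched as follows. This is an illustrative implementation, not the authors' code: the function name and the brute-force pairwise comparison are our assumptions, and each plane is assumed to be given as an array of its 2-D boundary points. The paper instead prefilters candidate planes with a rectangular region around the source plane before measuring distances.

```python
import numpy as np

def build_adjacency(planes, d_max):
    """Mark two roof planes as neighbours when any pair of their boundary
    points lies within dist_p = 2 * d_max of each other (a sketch; the
    paper speeds this up with a bounding-box prefilter per source plane)."""
    dist_p = 2.0 * d_max
    n = len(planes)
    M = np.zeros((n, n), dtype=bool)          # adjacency matrix M (n x n)
    for i in range(n):
        for j in range(i + 1, n):
            # minimum boundary-point distance between plane i and plane j
            diff = planes[i][:, None, :] - planes[j][None, :, :]
            if np.sqrt((diff ** 2).sum(-1)).min() <= dist_p:
                M[i, j] = M[j, i] = True
    return M
```

For example, with d_max = 0.5 m, two planes whose closest boundary points are 0.5 m apart are recorded as adjacent, while planes further than 1 m apart are not.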
Then, to find the intersection line between two adjacent planes P_1 and P_2, their plane equations are used to assess whether the planes mutually intersect in 3-D space. If they do, a point called the intersection point I_pnt and a direction vector n̂ in 3-D space are obtained. The two end points of the intersection line are not known from I_pnt and n̂ alone. Therefore, 2-D straight lines are first approximated using the roof boundary points which face each other. The MATLAB built-in function polyfit is used for the approximation of line segments. Following the concepts of 3-D coordinate geometry and using the approximated 2-D lines, I_pnt, and n̂ together, the 3-D intersection lines between the adjacent roof planes are estimated.
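Obtaining I_pnt and n̂ from two plane equations is standard 3-D coordinate geometry; a minimal sketch (our own formulation, with planes given as (a, b, c, d) coefficients of ax + by + cz + d = 0) is shown below. The direction is the cross product of the two plane normals, and one point on the line is found by solving a small linear system.

```python
import numpy as np

def plane_intersection(p1, p2, tol=1e-9):
    """Return (I_pnt, n_hat) for the 3-D intersection line of two planes
    given as (a, b, c, d), or None if the planes are (near-)parallel."""
    n1, d1 = np.asarray(p1[:3], float), float(p1[3])
    n2, d2 = np.asarray(p2[:3], float), float(p2[3])
    direction = np.cross(n1, n2)              # line direction
    if np.linalg.norm(direction) < tol:
        return None                           # parallel planes: no line
    # The point on both planes closest to the origin along the line:
    # it satisfies both plane equations and is orthogonal to the direction.
    A = np.vstack([n1, n2, direction])
    b = np.array([-d1, -d2, 0.0])
    point = np.linalg.solve(A, b)             # I_pnt
    return point, direction / np.linalg.norm(direction)
```

As the text notes, I_pnt and n̂ define the infinite intersection line only; the boundary points of the two planes are still needed to clip it to a finite ridge segment.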
The acceptable distance of I_pnt from the two corresponding plane boundaries is set, according to the resolution of the input LiDAR data, to dist_p = 2d_max. The plane insertion procedure first assesses the nearest distances of I_pnt from the two intersecting plane boundaries. If either of the two nearest distances exceeds 2d_max, it potentially indicates one of two general possibilities: (1) a missing LiDAR-based plane; or (2) a missing vertical plane, between these adjacent roof planes. Both possible scenarios are described using diagrams in the following sections.

4.2. Detection and Insertion of Missing Roof Planes

The proposed modelling technique now iteratively takes a plane and its neighbours to approximate their intersection lines. However, if the position of I_pnt is away (more than 2d_max) from the two intersecting plane boundaries, the plane insertion process attempts to search for any unsegmented LiDAR points between the participating roof planes. If such points are found (at least four), the process infers the presence of a plane between these roof planes. Figure 2a,b shows an example building and a small missing plane among the already extracted planes P_1, P_2, and P_3.
The process invokes the region-growing segmentation technique in Awrangjeb [9] to extract a planar region P_n using the unsegmented LiDAR points (see the green points in Figure 2c). In addition to the available points, the segmentation process uses points of the neighbouring planes (P_1, P_2, and P_3) for the extraction of the new plane P_n, assuming these points might have been added wrongly to the neighbouring planes because P_n was missed earlier. Therefore, each iteration of the region-growing technique computes a plane-fitting error between the new and neighbouring planes. If the new plane results in a height error smaller than those of the neighbouring planes' errors, the LiDAR points of the neighbouring planes are removed from their respective regions and added to P_n.
The segmentation process continues growing the region until it finds no points complying with the above height error criterion. After the segmentation process stops, the proposed technique estimates the boundary of the new plane and updates the boundary information of the neighbouring planes, as shown in Figure 2c. The process also updates the neighbourhood matrix M with the information on the new plane and the new neighbouring relations. Subsequently, the intersection lines between the identified plane and the participating planes are estimated using their boundary points, as explained in Section 4.1. These intersection lines are also recorded against their adjacent roof planes. Figure 2d shows all the roof planes of the sample building and the intersection lines between them.
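The height-error criterion used to reassign points during region growing can be sketched as a point-to-plane distance comparison. This is our own minimal formulation (function names and the orthogonal-distance measure are assumptions), not the exact error used in Awrangjeb [9]:

```python
import numpy as np

def point_plane_error(points, plane):
    """Orthogonal distance of 3-D points (N x 3) to a plane (a, b, c, d)."""
    n = np.asarray(plane[:3], float)
    return np.abs(points @ n + plane[3]) / np.linalg.norm(n)

def reassign(points, new_plane, old_plane):
    """True where a point fits the newly grown plane P_n better than the
    neighbouring plane it was originally assigned to."""
    return point_plane_error(points, new_plane) < point_plane_error(points, old_plane)
```

A point lying 0.1 m from the new plane but 0.9 m from its current plane would, under this criterion, be moved to P_n.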

4.3. Insertion of Vertical Roof Planes

Figure 3a shows a building where a small slanted plane is located between two adjacent roof planes labelled P_1 and P_2. This plane was not detected by the involved segmentation technique due to the absence of enough LiDAR points, as shown in Figure 3b. Therefore, a new plane P_4 between P_1 and P_2 is inserted as follows. The neighbouring boundary points of P_1 and P_2 are first used to form a vertical plane P_4. Then, the intersection lines of P_4 with P_1 and P_2, respectively, are determined. Figure 3c shows P_4 and its intersection lines with P_1 and P_2. The procedure not only keeps track of new planes, but also maintains the correct neighbourhood information in M. Therefore, even after the insertion of P_4, the neighbourhood selection method (described in Section 4.1) finds that P_4 has a new neighbouring plane, P_3, as can be seen in Figure 3b,d. The intersection between P_3 and P_4 must then be determined to precisely establish the topological relationships among the building roof planes and reconstruct a model with a good level of detail.
As shown in Figure 3, P_4 is actually a thin slanted (nearly vertical) plane. However, due to a shortage of points on this plane, it could not be inserted from the LiDAR data following the procedure in Section 4.2. Since a vertical plane is inserted instead, the intersection line between P_3 and P_4 cannot be found as expected. To solve this, the plane insertion procedure is executed following similar steps to those above, and a new vertical plane P_5 is inserted between P_3 and P_4 using their boundary points. The procedure further approximates the intersection lines between the participating roof planes, i.e., between P_3 and P_5 and between P_5 and P_4, as shown in Figure 3e. The procedure stops once all the roof planes are processed. Figure 3f shows all the building roof planes and their intersection lines in 3-D space for better visualisation.
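Forming a vertical plane from neighbouring boundary points amounts to fitting a 2-D line through those points and extruding it in Z (so the plane's c coefficient is zero). The sketch below is our own illustration of that idea, using a PCA/SVD line fit; the paper's exact fitting procedure is not specified at this level of detail:

```python
import numpy as np

def vertical_plane_from_points(points_xy):
    """Fit a vertical plane a*x + b*y + 0*z + d = 0 through a set of
    neighbouring boundary points, given as (x, y) pairs. The 2-D line is
    fitted by PCA: the plane normal is the direction of least variance."""
    pts = np.asarray(points_xy, float)
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]                      # unit normal of the fitted 2-D line
    d = -(a * centroid[0] + b * centroid[1])
    return (a, b, 0.0, d)
```

Because the returned plane has c = 0, its intersection lines with the adjacent slanted planes P_1 and P_2 can be computed with the same plane-equation machinery as in Section 4.1.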
Thereafter, the boundary of each building is extracted using the boundary tracing procedure in Ali et al. [31] and regularised using the technique proposed by Awrangjeb [32] to form an appropriate building footprint, which is a polygon consisting of 3-D corner points.

4.4. Rooftop Topology and Modelling

At this juncture, the information about the buildings, all their possible roof planes, intersection lines, and the adjacency relationship is available. For 3-D roof modelling, it is important to obtain 3-D ridge (intersection of ridge lines) and edge (intersection of ridge line and building boundary) points. However, as shown in Figure 4a, there is a small gap between each end point of a ridge line and its corresponding (actual) ridge or edge points. To fill the gaps and establish connectivity among the adjacent roof planes, the adjacency matrix M is used along with the Roof Topology Graph (RTG) following the principles proposed by Verma et al. [33]. As shown in Figure 4b, each roof plane is represented as a vertex in RTG and two adjacent planes are connected through an edge. These roof planes are labelled with their vertex numbers in Figure 4a.
In the context of the RTG, a basic cycle indicates a ridge point that belongs to several ridge lines [34]. For instance, the roof planes P_1, P_4, and P_5 form a basic cycle, and the intersection of the corresponding ridge lines determines a ridge intersection point. In addition, the corresponding vertices of the ridge lines participating in the intersection determination are updated. These points can also be referred to as ridge points, and are used at a later stage to approximate the model shape. An RTG can also be represented as a composition of several basic cycles, as shown in Figure 4c. The building rooftop shown in Figure 4a has six basic cycles and, therefore, six ridge intersection points. The least squares approach is applied to approximate the intersection among the ridge lines of the participating roof planes, as shown in Figure 4d. Thereafter, 3-D edge points are found by intersecting ridge lines with the building boundary. Figure 5a,b shows the edge points in two different perspective views. Note that, in a real scenario, two or more ridge lines intersect at the same point on the building boundary. However, the intersection of these ridge lines and the building boundary in the proposed modelling method may generate two or more individual points on the boundary. A single 3-D intersection point is not estimated for these points, since such a point may not lie on the building boundary. Hence, individual edge points are retained.
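The least squares intersection of several ridge lines can be sketched as follows. This is a standard formulation (minimising the sum of squared orthogonal distances to all lines); the paper's exact implementation may differ, and the representation of a line as a point plus unit direction is our assumption:

```python
import numpy as np

def ridge_point(lines):
    """Least-squares ridge intersection point for 3-D lines, each given as
    (point_on_line, direction). Solves sum_i (I - u_i u_i^T) x =
    sum_i (I - u_i u_i^T) p_i, the normal equations of the distance sum."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in lines:
        u = np.asarray(u, float) / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)       # projector orthogonal to u
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)
```

If the ridge lines of a basic cycle truly meet at one point, the solution reproduces that point exactly; when LiDAR noise makes them skew, it returns the point closest to all of them.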

4.5. Complete 3-D Building Models

For roof modelling, each building is processed separately, and the procedure first finds the 3-D points around each plane boundary and constructs each roof segment. To do this, the 3-D intersection lines whose junction points (red ovals in Figure 6) have been updated are recalled. Then, the junction points (edge or ridge points) of the 3-D intersection lines are re-ordered in succession around the plane using the information on the corresponding LiDAR-based building boundary points. This is shown in Figure 6a, where the junction points are labelled as N_1–N_6 to represent a roof segment. All the roof planes of each building are processed iteratively, and the corresponding roof model with regularised plane boundaries is generated, as shown in Figure 6b. Note that during the above modelling process (e.g., using least squares to approximate the intersection among the ridge lines), the planarity of the roof segment, say using the 3-D junction points N_1–N_6, may change slightly with respect to the original plane equation generated from the segmented LiDAR points. However, the plane equation is not updated, since it is estimated from a large number of LiDAR points on the plane.
For a complete 3-D building model, it is necessary to generate walls from the periphery of the roof model to its floor. In this regard, the edge points are used to generate the approximate building floor first. The ground height of each edge point is determined from the DTM so that the model seems to be a replica of its respective real building. All the consecutive ground points are connected to obtain the building floor. Finally, the building walls are determined by extruding the edge points to their corresponding floor points. Figure 7 presents the real building and its 3-D reconstructed model, where the walls are represented in a transparent grey colour.

5. Proposed 3-D Building Change Detection Method

Any building change detection method requires two sets of building models: a reference set and a test model set. The reference dataset is collected at an earlier date than that of the test dataset. For our investigation, however, datasets from two different dates were not available. Therefore, the 3-D models generated from the available dataset are considered to be in the reference model set and a model modification step is carried out to generate the test model set.
Figure 8 shows the flow diagram of the proposed building change detection method. The inputs to the proposed method are aerial images, LiDAR, DTM, and extracted 3-D building (reference) models. The proposed method has three major steps: (i) test model generation; (ii) creation of building data structure; and (iii) automatic change detection. The test model generation step is a manual step. If the test model set is generated from an available dataset captured at a later date than that of the reference dataset, then this step is not necessary.
To identify the changes, the reference and test models are represented in a graph-based data structure. Thereafter, they can be compared in the automatic change detection step. However, since the number of actual changes in buildings is small in practice, such a model-by-model comparison would be unnecessarily time consuming. Therefore, the height information of the test and reference models is first used to identify potential change locations. Then, the models are compared, if necessary, using their 3-D representations. Consequently, the planes from both models are classified into five groups: unchanged, new, modified, demolished and partially-modified planes.

5.1. Test Model Generation

The building models generated by the proposed 3-D building modelling technique are considered as reference models. To obtain test models, size and height changes are introduced into the reference models. Among the input data of the proposed change detection technique (see Figure 8), the aerial image contains sharp boundaries of objects, including buildings, while the DTM and the plane equations contain the bare-earth heights and the building heights, respectively. Therefore, the DTM, plane equations, and aerial images are collectively used to produce changes in the reference data.
The changes in the reference data are made by selecting points around a plane’s boundary in the aerial image and importing the X (Easting) and Y (Northing) coordinates of those locations. This procedure is shown in Figure 9. Based on the X and Y coordinates of a point, its Z (height) value is approximated using the respective plane equation. For wall planes, the Z value at a ground point of the wall plane is directly extracted from the DTM. The extracted plane information is later used to modify the building model of the reference data. For example, in Figure 10, five different changes are made to the reference models: addition of height to two buildings, addition of a veranda to a building, removal of verandas from two buildings, addition of a new building, and relocation of all buildings in the scene.
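Approximating Z from the plane equation is a one-line computation. The sketch below assumes planes stored as (a, b, c, d) coefficients of a*x + b*y + c*z + d = 0; the function name is illustrative.

```python
def height_from_plane(plane, x, y):
    # Solve a*x + b*y + c*z + d = 0 for z. A near-zero c means a
    # vertical (wall) plane, whose Z is read from the DTM instead.
    a, b, c, d = plane
    if abs(c) < 1e-9:
        raise ValueError("vertical plane: read Z from the DTM")
    return -(a * x + b * y + d) / c

# A horizontal roof plane at 6 m: 0*x + 0*y + 1*z - 6 = 0.
z_flat = height_from_plane((0.0, 0.0, 1.0, -6.0), 10.0, 20.0)
# A sloped plane x + z - 10 = 0 gives z = 10 - x.
z_slope = height_from_plane((1.0, 0.0, 1.0, -10.0), 4.0, 0.0)
```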

5.2. 3-D Model Representation

Unlike the existing methods, the proposed 3-D change detection method not only obtains building changes on a per-plane basis, but also detects modified and partially-modified planes. A graph-based 3-D data structure, in which individual planes and their relationships are represented, is proposed to detect modified and partially-modified planes. The relationships between planes help to identify different types of planes (e.g., roof and wall planes) and the relative position of a plane with respect to its neighbouring planes. In addition, they ensure the detection of wall planes, which usually remain undetected when only height changes are analysed.
The 3-D building model consists of roof and wall planes. The relationship between connected roof planes (indicated by the adjacency matrix M) can be a parent-to-child, a parent-to-parent, or a child-to-child connection. The parent-to-parent and child-to-child connections are also called sibling connections. All connected roof planes are initially labelled as siblings. Then, the parent-to-child connection of two connected roof planes is labelled by verifying two conditions: (1) the child and parent planes intersect each other; and (2) the child plane has a lower height value than the parent. Whereas the relationship between a roof plane and a wall plane is a parent-to-child connection, the relationship between two neighbouring wall planes is a child-to-child connection. Parents that intersect each other are in a parent-to-parent connection. For instance, in Figure 11, the parent-to-child, parent-to-parent, and child-to-child connections are marked by black, red and magenta arrows, respectively, for the building shown in the orange coloured rectangle in Figure 10a.
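The labelling rules above can be sketched as follows. The `intersects` callable is a placeholder for the plane-intersection check, and all names are illustrative rather than the authors' implementation; heights are compared via per-plane mean heights as a simplifying assumption.

```python
def label_connections(M, mean_height, intersects):
    # All connected pairs start as siblings; a pair becomes
    # parent-to-child when the planes intersect and one plane is lower
    # than the other (the lower plane is the child).
    n = len(M)
    labels = {}
    for i in range(n):
        for j in range(i + 1, n):
            if not M[i][j]:
                continue   # not adjacent in the adjacency matrix
            if intersects(i, j) and mean_height[i] != mean_height[j]:
                parent, child = (i, j) if mean_height[i] > mean_height[j] else (j, i)
                labels[(i, j)] = ("parent-to-child", parent, child)
            else:
                labels[(i, j)] = ("sibling", None, None)
    return labels

M = [[0, 1, 1], [1, 0, 0], [1, 0, 0]]   # plane 0 touches planes 1 and 2
h = [8.0, 8.0, 5.0]                     # plane 2 is a lower veranda
labels = label_connections(M, h, lambda i, j: True)
```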
Thereafter, a 3-D building model is represented in a graphical structure, where each plane of a building is considered as a separate node. A node contains the complete information of a plane, e.g., plane ID, plane equation, 3-D plane points (i.e., polygon), plane type (i.e., roof or wall plane), and its connected plane information. Figure 12 shows an example illustrating the data structure of the 3-D building model shown within the orange rectangle in Figure 10a. The roof and wall planes are represented by blue and red nodes, respectively. Since the complete information of a node cannot be shown in the data tree, only the plane ID is shown at each node. The connections between nodes are either sibling (parent-to-parent and child-to-child) or parent-to-child connections, highlighted by black and green coloured edges, respectively. The arrowhead of a green edge indicates the child in the parent-to-child relationship.
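A node in such a graph might be sketched as a small record. The field names below mirror the information listed above (plane ID, equation, polygon, type, and connections) but are illustrative, not the authors' actual data layout.

```python
from dataclasses import dataclass, field

@dataclass
class PlaneNode:
    # One node of the building graph; each plane is a separate node.
    plane_id: int
    equation: tuple            # (a, b, c, d) of a*x + b*y + c*z + d = 0
    polygon: list              # ordered 3-D boundary points
    plane_type: str            # "roof" or "wall"
    children: list = field(default_factory=list)   # parent-to-child edges
    siblings: list = field(default_factory=list)   # sibling edges

roof = PlaneNode(1, (0, 0, 1, -6),
                 [(0, 0, 6), (4, 0, 6), (4, 3, 6), (0, 3, 6)], "roof")
wall = PlaneNode(2, (1, 0, 0, 0), [], "wall")
roof.children.append(wall.plane_id)   # a roof plane is the parent of its walls
```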

5.3. Automatic Building Change Detection

The proposed change detection method measures the change between the 3-D models of the reference and test scenes by comparing their structural information. A full structural comparison using the graphical representation shown in Figure 12, between two corresponding building models B_r and B_t from the reference and test model sets, respectively, is unnecessary for the following reasons. Firstly, the full comparison of all corresponding building pairs (B_r, B_t) is computationally expensive. Secondly, in practice, no or only a small number of existing buildings will have changed in a given test scene. Therefore, the building structures of both models are first compared based on the height difference. Then, if necessary, only the related parts of the graph data structures are compared to obtain the specific types and details of the involved changes. By using these two tests, the proposed method classifies the building planes into five groups: unchanged, demolished, new, modified, and partially-modified planes.

5.3.1. Height Test

For the height test, two intensity images are generated: one for the reference scene and the other for the test scene. Each intensity image has a resolution of 0.25 m and represents the heights of individual roof planes with respect to the ground. Thus, a zero height represents no buildings. The input LiDAR points within the segmented planes from [9] are used to generate the height image. If the LiDAR points are not available, then the individual plane equations can be used to estimate the height at each pixel of the height image. In the height test, the height intensity image (say, I_r) of the reference scene is subtracted from that (say, I_t) of the test scene. Figure 13c shows the (pixel-to-pixel) height difference image (I_d = I_t − I_r) for the I_r and I_t in Figure 13a,b. In I_d, the following cases can occur for buildings in the scene (see Figure 13). Case A: For a completely new building, all height differences are positive in I_d and there are only zero height values in I_r. All planes within this new building are marked as new. Case B: Likewise, for a completely demolished building, all height differences are negative in I_d and there are only zero height values in I_t. All planes within this demolished building are marked as demolished. Case C: If there are no or negligible height differences everywhere within a building region, then all of its planes are marked as unchanged. In reality, however, if the two models of an unchanged building are generated from two datasets obtained at different times, the final models can be slightly different. Therefore, a height tolerance threshold (0.5 m) is used. In addition, there may be non-overlapping thin parts (due to a high height difference) along the building boundaries; these should be no more than 1 m wide for an unchanged building. Otherwise, the building is considered modified and Cases D and E below are considered.
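Cases A–C can be sketched directly on the two height images for one building footprint. This is a minimal pure-Python version with the 0.5 m tolerance mentioned above; the function name and nested-list image representation are assumptions of the sketch.

```python
TOL = 0.5   # height tolerance threshold in metres

def classify_building(Ir, It):
    # Coarse height test for one building footprint, comparing the
    # reference (Ir) and test (It) height images; 0 means no building.
    ref = [h for row in Ir for h in row]
    tst = [h for row in It for h in row]
    if all(h == 0 for h in ref) and any(h > 0 for h in tst):
        return "new"                                   # Case A
    if all(h == 0 for h in tst) and any(h > 0 for h in ref):
        return "demolished"                            # Case B
    if all(abs(t - r) <= TOL for r, t in zip(ref, tst)):
        return "unchanged"                             # Case C
    return "modified"                                  # Cases D-F below

Ir = [[4.0, 4.0], [4.0, 4.0]]
It = [[7.0, 7.0], [7.0, 7.0]]      # one storey added in height
result = classify_building(Ir, It)
```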
Once the above three cases (Cases A–C) are excluded from I_d, the remaining regions are all modified building regions in the scene. Figure 13d shows the image I_m containing only the modified regions after exclusion of the completely new, demolished and unchanged buildings from I_d in Figure 13c. In I_m, the following cases can occur for modified buildings in the scene. Case D: A building is modified by removing one or more parts, e.g., a veranda is removed, as shown within the orange rectangles in Figure 10a,d. This case is observed when the corresponding modified region has the same absolute height values in I_m and I_r, but zero height values in I_t. Case E: A building is modified by extending one or more parts, e.g., a veranda is added or extended. This case is observed when the corresponding modified region has the same absolute height values in I_m and I_t, but zero height values in I_r (see Figure 13d). Case F: A building is modified in the height direction, e.g., a one-storey building is converted to a two-storey building, or vice versa (see Figure 13d). For these three cases, all the unchanged areas, if any, are identified and the planes within them are marked unchanged (following the same procedure as in Case C above). Consequently, only the planes in each modified region in I_m are subjected to the plane test below, which exploits the graphical representation in Figure 12.

5.3.2. Plane Test

Let the two graphical representations of B_r and B_t be G_r and G_t, respectively. A removal of a building part (Case D) is the result of the full and/or partial demolition of one or more existing building planes. In this situation, the partially-demolished planes are present in both G_r and G_t, but the fully demolished planes are present only in G_r. In contrast, an extension of an existing building (Case E) consists of the addition of one or more completely new planes and/or the extension of one or more existing planes. In this case, the extended planes are present in both G_r and G_t, but the completely new planes are present only in G_t. When a building is modified in the height direction (Case F), the modification can, in addition to the height change, also include Cases D and E. Therefore, Cases D and E can be considered minor modifications and Case F a major modification. Each of the planes within a modified region can be classified as a new, demolished, modified or partially-modified plane. The proposed plane test tries to establish correspondences between the planes (from G_r and G_t) by applying the point-in-polygon (PIP) test [12]. A plane in G_t is marked as new if no corresponding plane is found in G_r (see the top-right corner of Figure 13d). A plane in G_r is marked as demolished if no corresponding plane is found in G_t. For instance, after the removal of the veranda in Figure 10, the planes within the orange coloured rectangle in Figure 12 are absent from G_t. Thus, no correspondences are established for new and demolished planes. For a fully or partially modified plane, however, a correspondence can be found through the PIP test.
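The correspondence search can be sketched with a standard ray-casting point-in-polygon check on the 2-D footprints. Matching a test plane's centroid against the reference polygons is an illustrative simplification of the PIP test cited above; the names are hypothetical.

```python
def point_in_polygon(pt, poly):
    # Ray-casting PIP test on a 2-D (X, Y) footprint polygon.
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def find_correspondence(test_centroid, reference_polygons):
    # Return the ID of the reference plane containing the test plane's
    # centroid, or None (i.e. the test plane has no counterpart).
    for pid, poly in reference_polygons.items():
        if point_in_polygon(test_centroid, poly):
            return pid
    return None

ref = {7: [(0, 0), (4, 0), (4, 3), (0, 3)]}
match = find_correspondence((2, 1.5), ref)     # falls inside plane 7
no_match = find_correspondence((10, 10), ref)  # a new plane
```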
To differentiate between fully modified and partially-modified reference planes, the height differences in I_m are again used. A reference plane is fully modified when there are height changes everywhere in the plane. This reference plane and its corresponding plane in G_t are marked as modified (see the mid-right side building in Figure 13d). Otherwise, the reference plane is partially modified, i.e., there are height changes in some areas and no height changes in the rest of the plane. Both the reference plane and its corresponding plane in G_t are then marked as partially-modified (see the building at the bottom of Figure 13d).
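The distinction drawn here reduces to checking whether the per-pixel height changes cover all or only part of a corresponded plane. A minimal sketch, reusing the 0.5 m tolerance and assuming the plane's pixels in the difference image are given as a flat list:

```python
TOL = 0.5   # height tolerance threshold in metres

def plane_change_type(diff_pixels):
    # Classify one corresponded plane from the height differences of
    # its pixels inside the modified region.
    changed = [abs(d) > TOL for d in diff_pixels]
    if all(changed):
        return "modified"            # height change everywhere
    if any(changed):
        return "partially-modified"  # change in only some areas
    return "unchanged"

full = plane_change_type([3.0, 3.1, 2.9])
part = plane_change_type([3.0, 0.0, 0.1])
```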
All these groups of planes in the reference and the test building models are shown in Figure 14, where the unchanged, demolished, new, modified, and partially-modified planes are marked by yellow, green, red, cyan, and blue colours, respectively.

6. Performance Study

The proposed 3-D building modelling technique requires only the point cloud data for the individual planes and the corresponding DTM. The proposed change detection technique needs two datasets from the same area, collected at two well-separated dates and reflecting some real building changes, as illustrated in Section 5.1. However, such datasets are hard to obtain publicly. As a result, two Australian datasets, which have high density point cloud data, high resolution multi-spectral orthophotos, and DTMs, were used. Since these datasets are available for one date only, test models were manually generated to verify the 3-D change detection technique. In this section, results for the 3-D building modelling and change detection techniques are presented separately.

6.1. Datasets

Two datasets, namely Aitkenvale (AV) and Hervey Bay (HB), shown in Figure 15a and Figure 16a, were selected to evaluate the performance of the proposed building modelling and change detection methods. The AV site has a point density of 29.3 points per m2 and covers an area of 108 × 80 m2. This dataset has five buildings and comparatively dense vegetation. The HB site covers an area of 108 × 104 m2 and has a point density of 12 points per m2. It contains 26 single-storey residential buildings of different sizes that are surrounded by vegetation cut into different shapes. Multi-spectral orthophotos with resolutions of 5 cm and 20 cm, respectively, were available for the two datasets. In addition, DTMs with 1 m resolution were used for both sites.
Since the data were not available from two dates, for verification of the proposed change detection method, the above available datasets were exploited to generate the reference building models by using the proposed 3-D modelling technique. However, the test models were manually generated (see Section 5.1). In total, 62 changes (test sites) were made to the AV and HB reference models by introducing one or more of the following operations: addition of a new building, addition of a veranda, increasing height of a building, removal of a veranda, removal of a roof plane, removal of a building, rotation of a building, and change of the positions of three or five buildings (for details, see Table 1).

6.2. Evaluation System

To verify the performance of the proposed building modelling and change detection methods, a previously proposed automatic evaluation system [12] was employed. Given two sets of input data (i.e., reference and test objects), this evaluation system estimates a set of evaluation metrics without any human interaction.
For evaluation of the generated building models, no 3-D reference data (building models) were available. In the literature, there is also a lack of appropriate evaluation metrics for 3-D models. Thus, it is hard to make a proper evaluation of the generated 3-D models. Earlier, the extracted roof planes and building boundaries were evaluated against 2-D reference data that were collected through monoscopic image measurement [9]. The proposed building modelling method uses those extracted planes for the generation of complete 3-D models. These input (extracted) planes consist of raw LiDAR points and are thus incomplete with zigzag boundaries. The proposed modelling technique inserts possible missing planes on the roof and finds the plane boundaries using the plane intersection lines. Consequently, the 2-D reference data from Awrangjeb and Fraser [9] were used to evaluate the planes in the generated 3-D models. In addition, since the main contribution of the proposed 3-D modelling method is the reconstruction of missing planes, it is also shown how many of the missing planes the proposed method successfully inserted.
The same limitation, i.e., the absence of actual 3-D reference data from two dates and the lack of appropriate evaluation metrics for 3-D changes, has been observed for evaluation of 3-D change detection performance. As a result, the reference (generated from the data of an earlier date) and test (generated from the data of a later date) models were directly compared to evaluate the change detection performance. Changes between these two sets of models were exploited to verify the changes detected by the proposed automatic change detection technique.
Mainly two categories of evaluation metrics, i.e., object-based and pixel-based, are used for 2-D evaluation of building models and changes. In object-based evaluation, the completeness (C_m), correctness (C_r), and quality (Q_l) metrics are estimated by counting the number of objects, whereas in pixel-based evaluation the completeness (C_mp), correctness (C_rp) and quality (Q_lp) are calculated by counting the number of pixels in the objects. In building model evaluation, the root-mean-square error (RMSE) is also used in both the planimetric (2-D space) and height directions to evaluate the geometric accuracy. In addition, the reference cross-lap, detection cross-lap, and area commission and omission errors are used to indicate segmentation errors.
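The object- and pixel-based metrics share the same formulas, applied to object counts and pixel counts respectively. In the sketch below, `tp`, `fn` and `fp` denote true positives, false negatives and false positives; the function name is illustrative.

```python
def metrics(tp, fn, fp):
    # Completeness Cm = TP/(TP+FN), correctness Cr = TP/(TP+FP),
    # quality Ql = TP/(TP+FN+FP). Counting pixels instead of objects
    # gives the pixel-based Cmp, Crp and Qlp with the same formulas.
    cm = tp / (tp + fn)
    cr = tp / (tp + fp)
    ql = tp / (tp + fn + fp)
    return cm, cr, ql

# e.g. 18 correctly detected planes, 2 missed, 1 false detection
cm, cr, ql = metrics(tp=18, fn=2, fp=1)
```

Quality combines both error types, so it is always bounded above by the smaller of completeness and correctness.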
Details of the above evaluation metrics can be found in Awrangjeb and Fraser [12]. In addition to the quantitative results, qualitative analysis is also presented via visualisation.

6.3. Parameter Setting

A limited number of parameters are used by the proposed building modelling and change detection techniques. Most of the parameter values were chosen from the existing literature. For example, the distance to find neighbouring planes or neighbouring LiDAR points (dist_p = 2 d_max), the height image resolution (0.25 m), the minimum width of a thin (unchanged) region (1 m), the plane fitting error (0.10 m) and the plane height error (0.15 m) are from Siddiqui et al. [35], Awrangjeb [1] and Awrangjeb and Fraser [9], and the Gaussian smoothing scale (σ = 3) is from Awrangjeb et al. [36].
In the proposed change detection technique, a height tolerance threshold and a distance threshold are also used. Since the reference models have been used to generate the test models, there was no height difference for unchanged planes. However, due to errors in the LiDAR data (which may be collected using the same or different systems on two different dates), there may still be some height differences for unchanged planes and buildings. The height threshold was therefore set at 0.5 m, allowing for the maximum error in the LiDAR data. The distance threshold was set at dist_p = 2 d_max, which is the minimum length and width of an overlapping, changed or unchanged area.

6.4. 3-D Building Modelling Results

The proposed geometric modelling technique relies entirely on LiDAR data, while images are used in this article for visualisation. The performance in terms of insertion of missing planes is presented in Table 2. The 3-D model generation results on the two test datasets AV and HB are presented quantitatively in Table 3 and qualitatively in Figure 15 and Figure 16, respectively.

6.4.1. Quantitative Results for 3-D Reconstruction

Table 2 shows the performance of the proposed 3-D modelling method in insertion of missing planes. In both test cases, since the reference and input plane sets do not include the wall planes, they are not counted here. In addition, the reference and input plane sets do not include vertical planes in real height jumps (e.g., between a main building and a connected veranda). However, these vertical planes in the height jumps are required to reconstruct the building models. Therefore, the vertical planes inserted for height jumps as well as for missing small slanted planes (e.g., see Figure 3) are counted here.
For the AV dataset, the proposed modelling method successfully extracted the only missing slanted roof plane using the unsegmented LiDAR points (see Figure 2). In addition, four vertical planes were inserted in the height jumps between the main buildings and verandas. For the HB dataset, to fill the 20 missing planes, it inserted seven LiDAR-based planes and four vertical planes, including the one shown in Figure 3. Therefore, while there are no more missing planes in the AV dataset, nine planes are still missing in the HB dataset. All these still-missing planes are small in size, sometimes less than 1 m2 in area, and therefore could not be recognised and inserted (see Section 6.4.3 below for further discussion).

6.4.2. Comparative Results

It is hard to compare the results of different 3-D building modelling methods. Firstly, the approaches that the 3-D building modelling methods adopt are different, for instance, model-driven and data-driven approaches. Secondly, different methods are evaluated using different datasets, which vary in input point density and complexity of buildings. Therefore, it is hard to find an appropriate method for comparisons. For example, Xiong et al. [20] presented a model-driven method that uses a set of pre-defined building models. It is evaluated using two datasets: the ISPRS [14] and Enschede. While the point density in the ISPRS dataset is 4–7 points/m2, in the Enschede dataset it is 20 points/m2. However, the Enschede dataset includes complex buildings with non-planar surfaces. Since the method proposed in this paper is a data-driven method that works on high density point cloud data comprising buildings with planar surfaces only, it may not be fair to compare it with Xiong et al. [20]. In the experimentation, the proposed method was tested against the ISPRS dataset, but it did not work well. Consequently, the proposed method is compared with Awrangjeb and Fraser [9].
For the AV dataset, the per-plane statistics in Table 3 show that the proposed modelling technique achieved 100% object-based completeness (C_m), correctness (C_r), and quality (Q_l), a 4.35% increase in object-based accuracy over that of the input roof planes. This is primarily because of the insertion of the missing roof planes. The proposed modelling technique has no detection cross-lap (under-segmentation) rate for the AV dataset because of the insertion of the new roof planes, whereas the statistics of the input roof planes showed under-segmentation errors with a detection cross-lap rate of C_rd = 4.5. In terms of pixel-based accuracy on the AV dataset, the evaluation results show a gradual increase in the per-plane completeness (C_mp), correctness (C_rp), and quality (Q_lp).
Table 3 shows that the proposed modelling technique achieved 3–4% better C_m, C_r, and Q_l for the HB dataset, with a corresponding impact on the detection cross-lap rate, indicated by a decrease in C_rd from 8.7% to 7.9%. In contrast, C_rr shows a slight increase in the reference cross-lap rate, which is due to the insertion of the (missing) vertical roof planes. The two area indices (the O_e and C_e errors) show better accuracy in terms of non-detected (omitted) area and incorrectly detected (committed) area between the input and reconstructed roof planes for both datasets. In addition, Table 3 further indicates that the reconstructed roof planes have high planimetric and height accuracies.

6.4.3. Qualitative Analysis for 3-D Models

Visual inspection of Figure 15 and Figure 16 not only indicates the ability of the proposed modelling technique to reconstruct variably-shaped buildings but also validates its application to the development of complex building models. However, there were some modelling errors, mainly as a result of small roof planes missing from the input planes. Figure 17 shows some of these errors in the magnified versions of buildings (labelled e–g in Figure 16d). These small planes were missed mainly because of under-segmentation errors by the involved segmentation technique [9] and a lack of available LiDAR points, especially in the HB dataset, where the point density was low. The rectangles in Figure 17 show the buildings where the height discontinuities (step edges) were not extracted properly due to the sparsity of the data points and under-detected sides of the roof planes. The ovals show the locations where the proposed technique was unable to recover the intersection points because of small missing planes. These shortcomings can be overcome by using spectral features from the corresponding aerial imagery. For instance, lines extracted from images can be used together with the LiDAR-approximated intersection lines to obtain accurate building models with low reconstruction errors.

6.5. 3-D Building Change Detection Results

The proposed change detection method classifies building planes into five groups: new, unchanged, demolished, modified, and partially-modified planes. In this paper, 62 changes were introduced to the two reference sites to test the performance of the proposed building change detection method. Both qualitative analysis and quantitative results are presented to show its performance.

6.5.1. Quantitative Results

Table 4 shows the change detection results for the thirty-one AV test sites (see Table 1) in terms of the object- and pixel-based completeness, correctness, and quality. The proposed building change detection method achieved 100% object-based completeness, correctness, and quality for building planes larger than 10 m2. In addition, it achieved 100% object-based completeness and over 95% object-based correctness and quality for planes of all sizes. Similarly, the pixel-based metrics for all the planes in the AV test sites are mostly greater than 90%.
The results on the thirty-one HB test sites (see Table 1) are tabulated in Table 5. The proposed building change detection method achieved 100% object-based completeness, correctness, and quality for building planes larger than 50 m2. For planes of all sizes, the proposed method achieved more than 90% object-based correctness, 98% object-based completeness and 95% pixel-based correctness.
The high object- and pixel-based correctness and quality values for all modified sites of the dataset indicate that the proposed change detection method detects all kinds of changes in building roof planes. The proposed change detection method achieved high pixel-based correctness but lower pixel-based completeness. Compared with the best results obtained by existing change detection methods, i.e., 95.7% overall completeness for all plane sizes [37] and 76.1% overall correctness for all plane sizes [38], the proposed change detection method achieved 89.4% overall completeness and 97.45% overall correctness for all plane sizes. However, this comparison may be unfair, as different datasets were used in the evaluation of these methods.

6.5.2. Qualitative Analysis

In this paper, 62 sets of changes have been made in the reference sites, AV(1)–(31) and HB(1)–(31), to evaluate the performance of the proposed building change detection method. Five test sites per reference site are used for visual demonstration: two of them contain height changes and changes in 2-D space, and the other three contain changes in 2-D and/or 3-D space. In the first two test sites of the AV and HB reference sites, AV(17), AV(26), HB(17), and HB(26), the changes made to the building models are changes in height, removal or addition of verandas, addition of a new building, and relocation of three buildings. The other three test sites, i.e., AV(29)–(31) and HB(29)–(31), are obtained by rotating a building, destroying a plane, and introducing a new building into the building model.
As shown in Figure 18 and Figure 19, the changes in the reference sites are accurately detected by the proposed change detection method. For example, in Figure 18b,c and Figure 19b,c, the height changes (blue, modified planes) are successfully detected by the proposed method. In addition, the new veranda and new building planes (in red), demolished planes (in green), and unchanged planes (in yellow) are successfully detected in the building models of the AV(17), AV(26), HB(17), and HB(26) sites. In the cases of the rotation of a building in Figure 18d and Figure 19d and the addition of a new building at the location of a demolished building in Figure 18e and Figure 19e, the modified planes (in blue), partially-modified planes (in cyan), demolished planes (in green), and unchanged planes (in yellow) are successfully detected by the proposed change detection method. In Figure 18f and Figure 19f, a few building planes are removed from the building models of the AV(31) and HB(31) sites, and these demolished planes (in green) and unchanged planes (in yellow) are also accurately detected by the proposed change detection method.

7. Conclusions

Here, the building modelling task is performed in an unsupervised and data-driven fashion. Unlike model-driven techniques, the roof types are not restricted to a pre-existing model catalogue. The roof planes that are not extracted, due to low point density, noise, and/or the vertical nature of the structures, are hypothesised using the roof topology assumption. As part of the modelling process, the interrelations and interconnections among the building roof planes are used for the reconstruction of building models. It was demonstrated that buildings can be reconstructed at higher levels of detail by using the individual roof planes and their interconnections based on their spatial adjacency.
The proposed 3-D change detection technique first classifies the plane connections into three types of relations: parent-to-child, parent-to-parent and child-to-child. Then, it represents each generated building model as a graph-based data structure. Since, in practice, only a small number of buildings change over a period of time, the height difference values between the reference and the test models are initially used to find new, completely demolished and unchanged buildings. The planes of these buildings are marked as new, demolished and unchanged, respectively. Thereafter, for the modified building regions only, the reference and test models are compared using the graph data structure. The planes in the modified building regions are classified as new, unchanged, demolished, modified and partially-modified planes.
The performance study using two Australian datasets shows the high effectiveness of both the reconstruction and change detection techniques. The proposed reconstruction method successfully inserted 57% (12 out of 21) of the missing roof planes. The nine planes that are still missing are all from the HB dataset. This dataset has a low input point density (12 points/m2) and contains buildings with complex roof structures. In addition, the missing planes are mainly small in size (i.e., less than 1 m2 in area). Moreover, compared with Awrangjeb and Fraser [9], the proposed reconstruction method showed 3–5% better performance in terms of object-based completeness, correctness and quality. The proposed change detection method showed 100% completeness, correctness and quality values on both datasets for planes more than 50 m2 in area. However, when the minimum plane size was set to 10 m2, these values dropped to between 95% and 100% for the HB dataset. When all planes were considered, the correctness value dropped to 95% for the AV dataset and the completeness value to 90% for the HB dataset. This indicates that it is harder to detect changes for smaller planes.
The output of the proposed building modelling and 3-D change detection techniques can be exploited to semiautomatically create a new map database or update an existing one. A graphical user interface (GUI) similar to the one presented by Awrangjeb [1] can be used to quickly rectify the building models, if there are errors, and then store them as a new building database. The same GUI can be used to indicate the changes in buildings by overlaying a new building database over an old one. A user can quickly accept or reject the indicated changes to update the database. As the number of changes is small in reality, such a semiautomatic update of the 3-D map database is cost-effective and can be scaled up to a large geographic area.
However, the proposed building reconstruction technique was found ineffective when the input point cloud data to the roof plane extraction technique (e.g., Awrangjeb and Fraser [9]) were low in density (less than 10 points/m2). Therefore, when the proposed 3-D reconstruction method was applied to the ISPRS benchmark dataset [14], it was not found to work well. When there are small planes or multiple missing planes which are neighbours of one another, the proposed technique fails to generate and insert the appropriate missing planes. In addition, in some particular situations, for example, when buildings have a non-planar roof component or a pyramid-hip roof, the proposed reconstruction procedure will not work well. Future work will include the investigation of a new 3-D roof reconstruction technique that can generate accurate high level building models with complex building roof structures and even using low density point cloud data.

Author Contributions

All three authors carried out the experiments and wrote the paper; S.A.N.G. mainly contributed to the building reconstruction part; F.U.S. worked on the change detection part; M.A. worked on both parts and oversaw all the activities.

Funding

This work was supported by the Australian Research Council under Grant DE120101778.

Acknowledgments

The Aitkenvale and Hervey Bay datasets were provided by Ergon Energy (www.ergon.com.au) in QLD, Australia.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Awrangjeb, M. Effective generation and update of a building map database through automatic building change detection from LiDAR point cloud data. Remote Sens. 2015, 7, 14119–14150. [Google Scholar] [CrossRef]
  2. Zolanvari, S.M.I.; Laefer, D.F.; Natanzi, A.S. Three-dimensional building façade segmentation and opening area detection from point clouds. ISPRS J. Photogramm. Remote Sens. 2018, in press. [Google Scholar] [CrossRef]
  3. Aljumaily, H.; Laefer, D.F.; Cuadra, D. Big-Data Approach for Three-Dimensional Building Extraction from Aerial Laser Scanning. J. Comput. Civil Eng. 2016, 30, 04015049. [Google Scholar] [CrossRef]
  4. Mongus, D.; Lukač, N.; Žalik, B. Ground and building extraction from LiDAR data based on differential morphological profiles and locally fitted surfaces. ISPRS J. Photogramm. Remote Sens. 2014, 93, 145–156. [Google Scholar] [CrossRef]
  5. Bizjak, M. The segmentation of a point cloud using locally fitted surfaces. In Proceedings of the 18th Mediterranean Electrotechnical Conference (MELECON), Lemesos, Cyprus, 18–20 April 2016; pp. 1–6. [Google Scholar]
  6. Tran, T.H.G.; Ressl, C.; Pfeifer, N. Integrated Change Detection and Classification in Urban Areas Based on Airborne Laser Scanning Point Clouds. Sensors 2018, 18, 448. [Google Scholar] [CrossRef] [PubMed]
  7. Leichtle, T.; Geiß, C.; Wurm, M.; Lakes, T.; Taubenböck, H. Unsupervised change detection in VHR remote sensing imagery—An object-based clustering approach in a dynamic urban environment. Int. J. Appl. Earth Obs. Geoinf. 2017, 54, 15–27. [Google Scholar] [CrossRef]
  8. Gu, W.; Lv, Z.; Hao, M. Change detection method for remote sensing images based on an improved Markov random field. Multimedia Tools Appl. 2017, 76, 17719–17734. [Google Scholar] [CrossRef]
  9. Awrangjeb, M.; Fraser, C.S. Automatic Segmentation of Raw LiDAR Data for Extraction of Building Roofs. Remote Sens. 2014, 6, 3716–3751. [Google Scholar] [CrossRef]
  10. Kim, K.; Shan, J. Building roof modeling from airborne laser scanning data based on level set approach. ISPRS J. Photogramm. Remote Sens. 2011, 66, 484–497. [Google Scholar] [CrossRef]
  11. Truong-Hong, L.; Laefer, D.F. Quantitative evaluation strategies for urban 3D model generation from remote sensing data. Comput. Graph. 2015, 49, 82–91. [Google Scholar] [CrossRef] [Green Version]
  12. Awrangjeb, M.; Fraser, C.S. An automatic and threshold-free performance evaluation system for building extraction techniques from airborne LIDAR data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2014, 7, 4184–4198. [Google Scholar] [CrossRef]
  13. Vo, A.V.; Truong-Hong, L.; Laefer, D.F.; Tiede, D.; d’Oleire Oltmanns, S.; Baraldi, A.; Shimoni, M.; Moser, G.; Tuia, D. Processing of Extremely High Resolution LiDAR and RGB Data: Outcome of the 2015 IEEE GRSS Data Fusion Contest—Part B: 3-D Contest. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 5560–5575. [Google Scholar] [CrossRef]
  14. Rottensteiner, F.; Sohn, G.; Gerke, M.; Wegner, J.D.; Breitkopf, U.; Jung, J. Results of the ISPRS benchmark on urban object detection and 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2014, 93, 256–271. [Google Scholar] [CrossRef]
  15. Li, Y.; Wu, H. An improved building boundary extraction algorithm based on fusion of optical imagery and LiDAR Data. Int. J. Light Electron Opt. 2013, 124, 5357–5362. [Google Scholar] [CrossRef]
  16. Awrangjeb, M.; Zhang, C.; Fraser, C.S. Automatic extraction of building roofs using LiDAR data and multispectral imagery. ISPRS J. Photogramm. Remote Sens. 2013, 83, 1–18. [Google Scholar] [CrossRef]
  17. Habib, A.; Kwak, E.; Al-Durgham, M. Model-based automatic 3D building model generation by integrating lidar and aerial images. Arch. Photogramm. Cartogr. Remote Sens. 2011, 22, 187–200. [Google Scholar]
  18. Cao, R.; Zhang, Y.; Liu, X.; Zhao, Z. 3D building roof reconstruction from airborne LiDAR point clouds: A framework based on a spatial database. Int. J. Geograph. Inf. Sci. 2017, 31, 1359–1380. [Google Scholar] [CrossRef]
  19. Oude Elberink, S.; Vosselman, G. Building Reconstruction by Target Based Graph Matching on Incomplete Laser Data: Analysis and Limitations. Sensors 2009, 9, 6101–6118. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  20. Xiong, B.; Oude Elberink, S.; Vosselman, G. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds. ISPRS J. Photogramm. Remote Sens. 2014, 93, 227–242. [Google Scholar] [CrossRef]
  21. Jung, J.; Jwa, Y.; Sohn, G. Implicit Regularization for Reconstructing 3D Building Rooftop Models Using Airborne LiDAR Data. Sensors 2017, 17, 621. [Google Scholar] [CrossRef] [PubMed]
  22. Wu, B.; Yu, B.; Wu, Q.; Yao, S.; Zhao, F.; Mao, W.; Wu, J. A graph-based approach for 3D building model reconstruction from airborne LiDAR point clouds. Remote Sens. 2017, 9, 92. [Google Scholar] [CrossRef]
  23. Teo, T.A.; Shih, T.Y. Lidar-based change detection and change-type determination in urban areas. Int. J. Remote Sens. 2013, 34, 968–981. [Google Scholar] [CrossRef]
  24. Tian, J.; Cui, S.; Reinartz, P. Building Change Detection Based on Satellite Stereo Imagery and Digital Surface Models. IEEE Trans. Geosci. Remote Sens. 2014, 52, 406–417. [Google Scholar] [CrossRef] [Green Version]
  25. Chen, L.C.; Huang, C.Y.; Teo, T.A. Multi-type change detection of building models by integrating spatial and spectral information. Int. J. Remote Sens. 2012, 33, 1655–1681. [Google Scholar] [CrossRef]
  26. Stal, C.; Tack, F.; De Maeyer, P.; De Wulf, A.; Goossens, R. Airborne photogrammetry and lidar for DSM extraction and 3D change detection over an urban area—A comparative study. Int. J. Remote Sens. 2013, 34, 1087–1110. [Google Scholar] [CrossRef] [Green Version]
  27. Qin, R. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery. ISPRS J. Photogramm. Remote Sens. 2014, 96, 179–192. [Google Scholar] [CrossRef]
  28. Sohn, G.; Huang, X.; Tao, V. A data-driven method for modeling 3D building objects using a binary space partitioning tree. In Topographic Laser Ranging and Scanning: Principles and Processing; CRC Press: Boca Raton, FL, USA, 2009; pp. 479–509. [Google Scholar]
  29. Kolbe, T.H.; Gröger, G.; Plümer, L. CityGML—Interoperable access to 3D city models. In Geo-Information for Disaster Management; Springer: Berlin/Heidelberg, Germany, 2005; pp. 883–899. [Google Scholar]
  30. Butkiewicz, T.; Chang, R.; Wartell, Z.; Ribarsky, W. Visual analysis and semantic exploration of urban lidar change detection. Comput. Graph. Forum 2008, 27, 903–910. [Google Scholar] [CrossRef]
  31. Gilani, S.A.N.; Awrangjeb, M.; Lu, G. Segmentation of Airborne Point Cloud Data for Automatic Building Roof Extraction. GISci. Remote Sens. 2018, 55, 63–89. [Google Scholar] [CrossRef]
  32. Awrangjeb, M. Using point cloud data to identify, trace, and regularize the outlines of buildings. Int. J. Remote Sens. 2016, 37, 551–579. [Google Scholar] [CrossRef] [Green Version]
  33. Verma, V.; Kumar, R.; Hsu, S. 3D Building Detection and Modeling from Aerial LIDAR Data. In Proceedings of the Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; Volume 2, pp. 2213–2220. [Google Scholar]
  34. Perera, G.S.N.; Maas, H.G. Cycle graph analysis for 3D roof structure modelling: Concepts and performance. ISPRS J. Photogramm. Remote Sens. 2014, 93, 213–226. [Google Scholar] [CrossRef]
  35. Siddiqui, F.U.; Teng, S.W.; Awrangjeb, M.; Lu, G. A Robust Gradient Based Method for Building Extraction from LiDAR and Photogrammetric Imagery. Sensors 2016, 16, 1110. [Google Scholar] [CrossRef] [PubMed]
  36. Awrangjeb, M.; Lu, G.; Fraser, C.S. Performance comparisons of contour-based corner detectors. IEEE Trans. Image Process. 2012, 21, 4167–4179. [Google Scholar] [CrossRef] [PubMed]
  37. Olsen, B.P.; Knudsen, T. Automated change detection for validation and update of geodata. In Proceedings of the 6th Geomatic Week, Barcelona, Spain, 8–10 February 2005. [Google Scholar]
  38. Matikainen, L.; Hyyppä, J.; Ahokas, E.; Markelin, L.; Kaartinen, H. Automatic detection of buildings and changes in buildings for updating of maps. Remote Sens. 2010, 2, 1217–1248. [Google Scholar] [CrossRef]
Figure 1. The workflow of the proposed 3-D building modelling technique.
Figure 2. Insertion of a missing plane: (a) test building image for demonstration; (b) input roof planes for corresponding building and the location of a missing plane; (c) new roof plane using unsegmented LiDAR points (green) and points from neighbours; and (d) 3-D intersection lines between roof planes.
Figure 3. Insertion of a real vertical plane: (a) test building image for demonstration; (b) input roof planes for corresponding building and location of missing plane; (c) insertion of a new vertical plane P 4 ; (d) assessing adjacency between existing plane P 3 and new plane P 4 ; (e) insertion of a new vertical plane P 5 between P 3 and P 4 ; and (f) 3-D view of building roof planes and intersection lines for construction of interrelation between roof planes.
Figure 4. Determination of ridge intersection points: (a) roof planes and ridge (intersection) lines; (b) Roof topology graph; (c) closed cycles; and (d) corresponding ridge intersection points.
Figure 5. Determination of edge intersection points: (a) edge points (ridge to building boundary intersection points); and (b) 3-D view of building showing edge points.
Figure 6. Individual roof segments and building model: (a) a roof segment is shown with a sequence of ridge and edge points ( N 1 to N 6 ); and (b) roof model of the sample building.
Figure 7. Complete building model for the sample building: (a) aerial image; and (b) 3-D model.
Figure 8. The flow diagram of the proposed building change detection method.
Figure 9. Extracting X and Y coordinates of a point of the cyan plane, where X and Y values are highlighted by a purple box.
Figure 10. Introduction of five different changes: (a) reference data; (b) addition of height to buildings; (c) addition of a veranda; (d) removal of verandas; (e) addition of a new building; and (f) relocation of building positions.
Figure 11. Relationship between the connected building planes; black, red, and magenta arrows represent the parent-to-child, parent-to-parent, and child-to-child connections, respectively. This building is the one within the orange rectangle in Figure 10a and its wall plane numbers are shown outside the building boundary.
Figure 12. A graph (data structure) for a building model shown within the orange rectangle in Figure 10a: roof planes are blue nodes; wall planes are red nodes; parent-to-child connections are in green edges (where an arrowhead indicates a child side); and sibling connections are in black edges.
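The graph data structure of Figure 12 can be sketched as a simple typed adjacency structure. The class, plane labels and connection names below are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch of a building graph: roof and wall planes as nodes,
# parent-to-child and sibling connections as typed edges. Plane ids
# and relations here are made up for illustration.

class BuildingGraph:
    def __init__(self):
        self.nodes = {}   # plane id -> "roof" or "wall"
        self.edges = []   # (from_plane, to_plane, connection kind)

    def add_plane(self, pid, kind):
        self.nodes[pid] = kind

    def connect(self, a, b, kind):
        # kind: "parent-child" (directed, arrowhead at child side)
        # or "sibling" (undirected connection between peers)
        self.edges.append((a, b, kind))

    def children(self, pid):
        return [b for a, b, k in self.edges if a == pid and k == "parent-child"]

g = BuildingGraph()
g.add_plane("R1", "roof")
g.add_plane("R2", "roof")
g.add_plane("W1", "wall")
g.connect("R1", "R2", "sibling")       # two roof planes meeting at a ridge
g.connect("R1", "W1", "parent-child")  # roof plane supported by a wall plane
```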
Figure 13. Finding the 3-D plane changes between the reference and test models: Height intensity images for: (a) reference models; (b) test models; (c) absolute height difference image (test minus reference); and (d) modified building regions.
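The comparison step of Figure 13 can be illustrated with a minimal sketch: the rasterised height-intensity images of the reference and test models are differenced, and the absolute difference is thresholded to flag modified regions. The 0.5 m tolerance below is an assumed illustrative value, not a parameter taken from the paper.

```python
import numpy as np

def change_mask(ref_heights, test_heights, tol=0.5):
    """Boolean mask of cells whose absolute height difference exceeds tol."""
    diff = np.abs(np.asarray(test_heights, dtype=float)
                  - np.asarray(ref_heights, dtype=float))
    return diff > tol

# Example: one cell gains 3 m (e.g., an added storey); only that cell is flagged.
ref = np.array([[5.0, 5.0], [0.0, 0.0]])
test = np.array([[5.0, 8.0], [0.0, 0.0]])
mask = change_mask(ref, test)
```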
Figure 14. Classification of building planes into five groups, i.e., unchanged (yellow), new (red), modified (blue), partially-modified (cyan) and demolished (green) planes: (a) roof planes only; and (b) roof and wall planes.
Figure 15. 3-D reconstructed models from Aitkenvale dataset: (a) aerial image; (b) LiDAR points of input roof planes; (c) building roof models; and (d) 3-D building models.
Figure 16. 3-D reconstructed models from Hervey Bay dataset: (a) aerial image; (b) LiDAR points of input roof planes; (c) building roof models; and (d) 3-D building models.
Figure 17. Issues in building modelling: The first row shows the building images and the second row illustrates the building models: (e–g) are indicated in Figure 16d.
Figure 18. Selected qualitative results of Aitkenvale (AV) reference site after applying the proposed change detection method: (a) AV reference models; and change detections in test sites: (b) AV(17); (c) AV(26); (d) AV(29); (e) AV(30); and (f) AV(31) (Table 1). The reference models in (a) are shown in pink and grey colours. In the change detection results (b–f), the unchanged, new, modified, partially-modified and demolished planes are marked by yellow, red, blue, cyan and green colours, respectively.
Figure 19. Selected qualitative results of Hervey Bay (HB) reference site after applying the proposed change detection method: (a) HB reference models; and change detections in test sites: (b) HB(17); (c) HB(26); (d) HB(29); (e) HB(30); and (f) HB(31) (Table 1). The reference models in (a) are shown in pink and grey colours. In the change detection results (b–f), the unchanged, new, modified, partially-modified and demolished planes are marked by yellow, red, blue, cyan and green colours, respectively.
Table 1. Operations performed on reference models from the Aitkenvale (AV) and Hervey Bay (HB) datasets to generate test models. A tick indicates that a particular operation was applied to the corresponding test site. Ah, addition of height; Ab, addition of building; Av, addition of veranda; Rv, removal of veranda; Bp, building position change; Rob, rotation of building; Rp, removal of plane.
Test Sites | Ah | Ab | Av | Rv | Interchange (3 Bp) | Interchange (5 Bp) | Rob | Rp
AV(1)HB(1) 🗸
AV(2)HB(2)🗸
AV(3)HB(3) 🗸
AV(4)HB(4) 🗸
AV(5)HB(5)🗸 🗸
AV(6)HB(6)🗸 🗸
AV(7)HB(7) 🗸 🗸
AV(8)HB(8) 🗸 🗸
AV(9)HB(9)🗸🗸 🗸
AV(10)HB(10)🗸🗸 🗸
AV(11)HB(11) 🗸
AV(12)HB(12) 🗸 🗸
AV(13)HB(13) 🗸 🗸
AV(14)HB(14)🗸 🗸
AV(15)HB(15) 🗸🗸
AV(16)HB(16) 🗸🗸 🗸
AV(17)HB(17)🗸🗸🗸 🗸
AV(18)HB(18) 🗸🗸 🗸
AV(19)HB(19)🗸🗸🗸 🗸
AV(20)HB(20)🗸 🗸
AV(21)HB(21)🗸 🗸🗸
AV(22)HB(22)🗸 🗸 🗸
AV(23)HB(23) 🗸🗸
AV(24)HB(24) 🗸 🗸🗸
AV(25)HB(25) 🗸 🗸 🗸
AV(26)HB(26)🗸🗸 🗸🗸
AV(27)HB(27)🗸🗸 🗸 🗸
AV(28)HB(28) 🗸
AV(29)HB(29) 🗸
AV(30)HB(30) 🗸
AV(31)HB(31) 🗸
Table 2. Performance in insertion of missing planes. For all buildings in a scene, N_ref is the number of reference planes, N_input is the number of input planes from [9] and N_missing is the number of missing planes in the input plane sets. For the proposed method, N_recon is the number of planes in the reconstructed models, N_lidar is the number of inserted LiDAR-based planes, N_vertical is the number of inserted vertical planes, N_total is the total number of inserted planes, and N_stillmiss is the number of still-missing planes.
Test Case | N_ref | N_input | N_missing | N_recon | N_lidar | N_vertical | N_total | N_stillmiss
AV | 25 | 24 | 1 | 29 | 1 | 4 | 5 | 0
HB | 167 | 147 | 20 | 158 | 7 | 4 | 11 | 9
Table 3. Roof plane evaluation results using threshold-free reference classification of the Australian datasets. C_m, completeness; C_r, correctness; Q_l, quality (percentages); C_mp, pixel completeness; C_rp, pixel correctness; Q_lp, pixel quality; C_rd, detection cross-lap (under-segmentation) rate; C_rr, reference cross-lap (over-segmentation) rate; O_e, area omission error; C_e, area commission error; RMS_XY, planimetric accuracy (metres); RMSE_z, height accuracy (metres).
Test Case | C_m | C_r | Q_l | C_rd | C_rr | C_mp | C_rp | Q_lp | O_e | C_e | RMS_XY | RMSE_z
Input roof planes [9]:
AV | 95.65 | 100.0 | 95.65 | 4.5 | 0 | 88.96 | 93.63 | 83.89 | 11.0 | 5.9 | 0.02 | 0.03
HB | 85.62 | 95.33 | 82.18 | 8.7 | 0.5 | 73.44 | 82.13 | 63.32 | 26.55 | 17.86 | 0.39 | 0.03
Proposed 3-D reconstructed roof planes:
AV | 100.0 | 100.0 | 100.0 | 0 | 0 | 90.95 | 94.02 | 85.14 | 9.0 | 5.1 | 0.02 | 0.03
HB | 88.02 | 98.0 | 86.47 | 7.9 | 0.8 | 76.38 | 85.43 | 72.42 | 23.61 | 14.86 | 0.34 | 0.02
Table 4. Building change detection results for 2-D (size) changes in the AV site. Object-based: C_m, completeness; C_r, correctness; Q_l, quality (C_m10, C_r10, Q_l10 and C_m50, C_r50, Q_l50 are for building planes over 10 m2 and 50 m2, respectively). Pixel-based: C_mp, completeness; C_rp, correctness; Q_lp, quality. All values are in percentage.
Modified Sites | C_m | C_r | Q_l | C_m10 | C_r10 | Q_l10 | C_m50 | C_r50 | Q_l50 | C_mp | C_rp | Q_lp
AV(1) | 100 | 95.2 | 95.2 | 100 | 100 | 100 | 100 | 100 | 100 | 93.6 | 97.4 | 91.4
AV(2) | 100 | 95.8 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | 95.0 | 97.4 | 92.6
AV(3) | 100 | 96.0 | 96.0 | 100 | 100 | 100 | 100 | 100 | 100 | 95.1 | 97.4 | 92.7
AV(4) | 100 | 96.1 | 96.1 | 100 | 100 | 100 | 100 | 100 | 100 | 94.0 | 86.8 | 82.2
AV(5) | 100 | 95.8 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | 95.0 | 97.4 | 92.6
AV(6) | 100 | 95.8 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | 94.9 | 97.4 | 92.6
AV(7) | 100 | 96.1 | 96.1 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.5 | 91.3
AV(8) | 100 | 96.1 | 96.1 | 100 | 100 | 100 | 100 | 100 | 100 | 94.3 | 96.5 | 91.3
AV(9) | 100 | 96.1 | 96.1 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.6 | 91.3
AV(10) | 100 | 96.1 | 96.1 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.6 | 91.3
AV(11) | 100 | 95.8 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | 94.9 | 97.3 | 92.6
AV(12) | 100 | 96.0 | 96.0 | 100 | 100 | 100 | 100 | 100 | 100 | 95.1 | 97.4 | 92.7
AV(13) | 100 | 96.0 | 96.0 | 100 | 100 | 100 | 100 | 100 | 100 | 95.0 | 97.4 | 92.7
AV(14) | 100 | 96.0 | 96.0 | 100 | 100 | 100 | 100 | 100 | 100 | 95.1 | 97.4 | 92.7
AV(15) | 100 | 96.4 | 96.4 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.5 | 91.4
AV(16) | 100 | 96.4 | 96.4 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.5 | 91.4
AV(17) | 100 | 96.4 | 96.4 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.5 | 91.4
AV(18) | 100 | 96.4 | 96.4 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.5 | 91.3
AV(19) | 100 | 96.4 | 96.4 | 100 | 100 | 100 | 100 | 100 | 100 | 94.4 | 96.5 | 91.4
AV(20) | 100 | 95.2 | 95.2 | 100 | 100 | 100 | 100 | 100 | 100 | 93.6 | 97.4 | 91.4
AV(21) | 100 | 95.2 | 95.2 | 100 | 100 | 100 | 100 | 100 | 100 | 93.6 | 97.5 | 91.4
AV(22) | 100 | 95.2 | 95.2 | 100 | 100 | 100 | 100 | 100 | 100 | 93.6 | 97.5 | 91.4
AV(23) | 100 | 95.6 | 95.6 | 100 | 100 | 100 | 100 | 100 | 100 | 92.7 | 86.51 | 81.0
AV(24) | 100 | 95.6 | 95.6 | 100 | 100 | 100 | 100 | 100 | 100 | 93.2 | 96.6 | 90.2
AV(25) | 100 | 95.6 | 95.6 | 100 | 100 | 100 | 100 | 100 | 100 | 93.1 | 96.6 | 90.2
AV(26) | 100 | 95.6 | 95.6 | 100 | 100 | 100 | 100 | 100 | 100 | 93.2 | 96.6 | 90.3
AV(27) | 100 | 95.6 | 95.6 | 100 | 100 | 100 | 100 | 100 | 100 | 93.2 | 96.6 | 90.3
AV(28) | 100 | 95.8 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | 95.0 | 97.4 | 92.6
AV(29) | 100 | 96.1 | 96.1 | 100 | 100 | 100 | 100 | 100 | 100 | 83.0 | 97.2 | 81.1
AV(30) | 100 | 95.8 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | 95.1 | 97.3 | 92.7
AV(31) | 100 | 95.2 | 95.2 | 100 | 100 | 100 | 100 | 100 | 100 | 93.4 | 96.6 | 90.5
Average | 100 | 95.8 | 95.8 | 100 | 100 | 100 | 100 | 100 | 100 | 93.9 | 96.4 | 90.7
Table 5. Building change detection results for changes in the HB site. Object-based: C_m, completeness; C_r, correctness; Q_l, quality (C_m10, C_r10, Q_l10 and C_m50, C_r50, Q_l50 are for building planes over 10 m2 and 50 m2, respectively). Pixel-based: C_mp, completeness; C_rp, correctness; Q_lp, quality. All values are in percentage.
Modified Sites | C_m | C_r | Q_l | C_m10 | C_r10 | Q_l10 | C_m50 | C_r50 | Q_l50 | C_mp | C_rp | Q_lp
HB(1) | 90.9 | 99.2 | 88.5 | 98.0 | 100 | 96.5 | 100 | 100 | 100 | 84.4 | 96.9 | 82.2
HB(2) | 90.5 | 99.3 | 88.5 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 84.7 | 99.6 | 84.4
HB(3) | 90.5 | 99.3 | 88.6 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 85.0 | 99.6 | 84.8
HB(4) | 90.9 | 99.3 | 89.0 | 97.5 | 100 | 96.7 | 100 | 100 | 100 | 85.4 | 99.6 | 85.1
HB(5) | 90.5 | 99.3 | 88.5 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 84.7 | 99.6 | 84.4
HB(6) | 90.5 | 99.3 | 88.5 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 84.7 | 99.6 | 84.4
HB(7) | 90.9 | 99.3 | 89.0 | 97.5 | 100 | 96.7 | 100 | 100 | 100 | 85.4 | 99.6 | 85.1
HB(8) | 90.9 | 99.3 | 89.0 | 97.5 | 100 | 96.7 | 100 | 100 | 100 | 85.4 | 99.6 | 85.1
HB(9) | 90.9 | 99.3 | 89.0 | 97.6 | 100 | 96.7 | 100 | 100 | 100 | 85.4 | 99.6 | 85.1
HB(10) | 90.9 | 99.3 | 89.0 | 97.5 | 100 | 96.7 | 100 | 100 | 100 | 85.4 | 99.6 | 85.1
HB(11) | 90.4 | 99.3 | 88.5 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 84.7 | 99.6 | 84.4
HB(12) | 90.5 | 99.3 | 88.6 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 85.0 | 99.6 | 84.8
HB(13) | 90.5 | 99.3 | 88.6 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 85.0 | 99.6 | 84.8
HB(14) | 90.5 | 99.3 | 88.6 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 85.0 | 99.6 | 84.8
HB(15) | 90.5 | 93.9 | 84.3 | 97.4 | 95.2 | 92.1 | 100 | 100 | 100 | 85.0 | 91.8 | 79.0
HB(16) | 90.9 | 99.3 | 89.1 | 97.5 | 100 | 96.7 | 100 | 100 | 100 | 85.7 | 99.6 | 85.5
HB(17) | 90.9 | 99.3 | 89.1 | 97.6 | 100 | 96.7 | 100 | 100 | 100 | 85.7 | 99.6 | 85.5
HB(18) | 90.9 | 99.3 | 89.1 | 97.5 | 100 | 96.7 | 100 | 100 | 100 | 85.7 | 99.6 | 85.5
HB(19) | 90.9 | 99.3 | 89.1 | 97.6 | 100 | 96.7 | 100 | 100 | 100 | 85.8 | 99.6 | 85.5
HB(20) | 90.9 | 99.3 | 88.5 | 98.0 | 100 | 96.5 | 100 | 100 | 100 | 84.5 | 96.9 | 82.3
HB(21) | 90.9 | 99.3 | 88.5 | 98.0 | 100 | 96.5 | 100 | 100 | 100 | 84.5 | 96.9 | 82.3
HB(22) | 90.9 | 99.3 | 88.5 | 98.0 | 100 | 96.5 | 100 | 100 | 100 | 84.5 | 96.9 | 82.3
HB(23) | 91.3 | 99.3 | 89.0 | 98.1 | 100 | 96.7 | 100 | 100 | 100 | 85.2 | 97.1 | 83.1
HB(24) | 91.3 | 99.3 | 89.0 | 98.1 | 100 | 96.7 | 100 | 100 | 100 | 85.2 | 97.1 | 83.1
HB(25) | 91.3 | 99.3 | 89.0 | 98.1 | 100 | 96.7 | 100 | 100 | 100 | 85.2 | 97.1 | 83.1
HB(26) | 91.3 | 99.3 | 89.0 | 98.1 | 100 | 96.7 | 100 | 100 | 100 | 85.3 | 97.1 | 83.1
HB(27) | 91.3 | 99.3 | 89.0 | 98.1 | 100 | 96.7 | 100 | 100 | 100 | 85.3 | 97.1 | 83.1
HB(28) | 90.4 | 99.2 | 88.5 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 84.7 | 99.6 | 84.4
HB(29) | 90.2 | 99.2 | 88.2 | 97.3 | 100 | 96.4 | 100 | 100 | 100 | 85.1 | 99.6 | 84.4
HB(30) | 90.4 | 99.2 | 88.5 | 97.4 | 100 | 96.5 | 100 | 100 | 100 | 84.7 | 99.6 | 84.4
HB(31) | 90.1 | 98.5 | 87.4 | 97.3 | 99.2 | 95.6 | 100 | 100 | 100 | 79.4 | 95.7 | 76.7
Average | 90.8 | 99.1 | 88.6 | 97.6 | 99.8 | 96.4 | 100 | 100 | 100 | 84.9 | 98.5 | 83.8

Share and Cite

MDPI and ACS Style

Awrangjeb, M.; Gilani, S.A.N.; Siddiqui, F.U. An Effective Data-Driven Method for 3-D Building Roof Reconstruction and Robust Change Detection. Remote Sens. 2018, 10, 1512. https://doi.org/10.3390/rs10101512
