Building Surface Damage Quantification and Localization Through Point Feature Analysis
Student thesis: Doctoral Thesis
Award date: 30 Oct 2023
Permanent Link: https://scholars.cityu.edu.hk/en/theses/theses(edbf1199-c9e9-46f6-ab2a-f33eb6406dc1).html
Abstract
Buildings are highly vulnerable to disasters, and the resulting damage is substantial. Assessing building damage in such scenarios is critical, since an accurate and immediate assessment can support rescue operations and resource planning right after a disaster strikes. In practice, field surveys are the most common damage assessment approach and, according to prior research, can produce highly accurate assessments in some cases. However, they are not viable for city-wide damage assessments because of the heavy demand on human resources, the threat that unstable structures pose to assessors, and the time required. The prime expectation of post-disaster damage assessment is a detailed and accurate report that captures the damage sustained by each building component without endangering the assessors. Accordingly, this study is designed to address three significant challenges arising from this expectation: 1) how to obtain building surface data in a complex built environment without posing a threat to the assessors, 2) how to identify and isolate the damaged regions of the building surface, and 3) how to localize and quantify the identified damage in the respective building components.
Previous studies suggest that point clouds of the targeted buildings are popular for assessing structures remotely because they are rich in 3D spatial information. However, point cloud generation for buildings in complex built environments is challenging because occlusions limit the visibility of the surfaces. External or self-occlusions can interfere with acquiring image data of occluded areas, resulting in incomplete point clouds and poorly textured dense models. Building photogrammetry in the presence of occlusion is rarely investigated, and no studies could be found on capturing occluded surfaces. To address this issue, this study proposes a two-stage image data collection approach using small-scale unmanned aerial vehicles (UAVs). A novel camera viewpoint generation algorithm is proposed to recapture occluded areas identified in stage 1. Two building scenes (one synthetic and one real) were used to examine the proposed method. A comparative analysis demonstrates a significant improvement in the quality and completeness of the final point cloud, especially in occluded areas, about 13%-17% higher than previously suggested methods.
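To make the two-stage idea concrete, the following minimal sketch (Python with NumPy and SciPy; not the thesis implementation, and the density radius, neighbour threshold, and stand-off distance are assumptions made here for illustration) flags sparsely reconstructed regions of a stage-1 point cloud and proposes stage-2 camera positions that look back at those regions along the local surface normal.

```python
# Minimal sketch of stage-2 viewpoint proposal (illustrative only, not the
# thesis algorithm): points with few neighbours in the stage-1 cloud are
# treated as likely occluded, and a camera is placed a fixed stand-off
# distance away along the (assumed outward-facing) surface normal.
import numpy as np
from scipy.spatial import cKDTree

def propose_viewpoints(points, normals, radius=0.5, min_neighbors=25, standoff=5.0):
    """points, normals: (N, 3) arrays from the stage-1 reconstruction.
    Returns (camera_position, look_at_target) pairs for under-covered spots."""
    tree = cKDTree(points)
    neighbor_lists = tree.query_ball_point(points, radius)
    counts = np.array([len(idx) for idx in neighbor_lists])
    sparse = counts < min_neighbors                 # likely occluded / under-sampled
    targets = points[sparse]
    cameras = targets + standoff * normals[sparse]  # back off along the normal
    return list(zip(cameras, targets))

# Usage (hypothetical file names):
# pts = np.load("stage1_points.npy"); nrm = np.load("stage1_normals.npy")
# for cam, target in propose_viewpoints(pts, nrm):
#     print(cam, "->", target)
```

In practice the proposed positions would still need collision and visibility checks before being flown; the sketch only illustrates how under-sampled regions can seed new camera viewpoints.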
The second part of the study uses the generated point clouds to detect and isolate surface damage. The developed method employs point normals and point colour intensities to separate damaged from undamaged regions, and the DBSCAN algorithm is then used to refine and isolate the damaged regions. Two synthetic building point clouds of different complexity were created to test the proposed method, which was further validated with two real-world point clouds and the point cloud generated in the first part. The robustness of the developed method is demonstrated by precisions of 100% and 97% and F1 scores of 99% and 98%, respectively. Validation with the real-world point clouds showed that the proposed method can detect damage at any surface location, even under extreme shadow effects and complex surface colour profiles. Semantic segmentation using a deep neural network (DNN) is then introduced to localize the damage: the SCF-Net algorithm first segments the building point cloud into four main components (columns, ground, roof, and walls), and a postprocessing step then improves the segmentation accuracy. Finally, the results of the second and third parts of the study are fused to quantify and localize the damage to the building.
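A minimal sketch of the detection-and-clustering step is given below, assuming simple thresholds on normal deviation and grey-level intensity (the thesis's actual criteria and parameter values are not reproduced here); scikit-learn's DBSCAN then groups candidate damage points into regions and discards isolated noise.

```python
# Illustrative sketch (assumed thresholds, not the thesis parameters):
# flag candidate damage points by how far their normal deviates from the
# dominant surface orientation and by dark colour intensity, then cluster
# the candidates into damage regions with DBSCAN.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_damage_regions(points, normals, colors, normal_tol_deg=20.0,
                          intensity_thr=0.35, eps=0.1, min_samples=15):
    """points, normals: (N, 3) arrays; colors: (N, 3) RGB in [0, 1].
    Returns per-point labels: -1 = intact or noise, 0..k = damage regions."""
    ref = normals.mean(axis=0)                      # dominant surface orientation
    ref /= np.linalg.norm(ref)
    angles = np.degrees(np.arccos(np.clip(np.abs(normals @ ref), 0.0, 1.0)))
    intensity = colors.mean(axis=1)                 # simple grey-level intensity
    candidate = (angles > normal_tol_deg) | (intensity < intensity_thr)

    labels = np.full(len(points), -1, dtype=int)
    if candidate.any():
        # DBSCAN keeps spatially dense clusters of candidates, drops stray points.
        labels[candidate] = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points[candidate])
    return labels
```

Per-region damage areas or extents could then be read off each cluster and matched against the component labels produced by the semantic segmentation step to obtain a component-wise damage summary.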
This study primarily contributes to the post-disaster impact assessment stage, where critical reconstruction decisions are made. Furthermore, a component-wise quantified building damage report provides information about the severity of the damage to each building component. This information can be vital for studies such as assessing the structural integrity of a building and the impact of different disasters on different building components. The frameworks proposed in each part of the study deliver the following outcomes: 1) a framework for point cloud generation for structures occluded by external objects; 2) a practical and automated damage detection and isolation method; 3) an approach to improve DNN-based semantic segmentation results for building point clouds; and 4) a process to quantify and localize detected damage in the corresponding building component. Moreover, this study contributes to disaster resilience efforts by providing a complete building damage assessment approach, from data collection to component-level reporting.