Classification of structural building damage grades from multi-temporal photogrammetric point clouds using a machine learning model trained on virtual laser scanning data
Automatic damage assessment based on UAV-derived 3D point clouds can provide fast information on the damage situation after an earthquake. However, the assessment of multiple damage grades is challenging due to the variety of damage patterns and the limited transferability of existing methods to other geographic regions or data sources. We present a novel approach to automatically assess multi-class building damage from real-world multi-temporal point clouds using a machine learning model trained on virtual laser scanning (VLS) data. We (1) identify object-specific change features, (2) separate changed and unchanged building parts, (3) train a random forest machine learning model with VLS data based on object-specific change features, and (4) use the classifier to assess building damage in real-world point clouds from photogrammetry-based dense image matching (DIM). We evaluate classifiers trained on different input data with respect to their capacity to classify three damage grades (heavy, extreme, destruction) in pre- and post-event DIM point clouds of a real earthquake event. Our approach is transferable with respect to multi-source input point clouds used for training (VLS) and application (DIM) of the model. We further achieve geographic transferability of the model by training it on simulated data of geometric change, which characterises relevant damage grades across different geographic regions. The model yields high multi-target classification accuracies (overall accuracy: 92.0%). Its performance changes only marginally when region-specific training data are used (< 3% difference in overall accuracy) or when real-world region-specific training data are used (< 2% difference in overall accuracy). We consider our approach relevant for applications where timely information on the damage situation is required and sufficient real-world training data are not available.
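The train-on-simulation, apply-to-real workflow described above can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the feature definitions, noise levels, and data generation are hypothetical placeholders standing in for object-specific change features computed from VLS (training) and DIM (application) point clouds.

```python
# Hedged sketch: train a random forest on simulated (VLS-like) change features,
# then apply it to noisier "real-world" (DIM-like) features of the same kind.
# All feature semantics and magnitudes here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
GRADES = ["heavy", "extreme", "destruction"]  # damage grades from the abstract


def change_features(n, grade_idx, noise, rng):
    """Toy object-specific change features per building (placeholders):
    mean multi-temporal point distance, fraction of changed points,
    and roughness change. Magnitudes grow with damage grade."""
    base = np.array([0.2, 0.3, 0.05]) * (grade_idx + 1)
    return base + rng.normal(0.0, noise, size=(n, 3))


# Step (3): train the classifier on simulated VLS data (lower noise).
X_vls = np.vstack([change_features(200, g, 0.05, rng) for g in range(3)])
y_vls = np.repeat(GRADES, 200)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_vls, y_vls)

# Step (4): apply the trained model to DIM-like features
# (higher noise and a small systematic offset, mimicking a sensor change).
X_dim = np.vstack([change_features(100, g, 0.08, rng) for g in range(3)]) + 0.02
y_dim = np.repeat(GRADES, 100)
accuracy = clf.score(X_dim, y_dim)
print(f"overall accuracy on DIM-like data: {accuracy:.2f}")
```

Because both domains share the same feature definitions, the classifier transfers despite the training data being entirely simulated, which is the core idea the abstract describes.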