
Title: Engineering deep learning methods on automatic detection of damage in infrastructure due to extreme events

This paper presents comprehensive experimental studies on automated Structural Damage Detection (SDD) in extreme events using deep learning methods that process 2D images. In the first study, a 152-layer residual network (ResNet) is used to classify multiple classes in eight SDD tasks, including identification of scene level, damage level, and material type. The proposed ResNet achieves high accuracy on each task, but it cannot identify the positions of the damage. In the second study, the ResNet and a segmentation network (U-Net) are combined into a new pipeline of cascaded networks for categorizing and locating structural damage. The results show that damage-detection accuracy improves significantly compared with using a segmentation network alone. In the third and fourth studies, end-to-end networks are developed and tested as a new solution for directly detecting cracks and spalling in image collections from recent large earthquakes. One of the proposed networks achieves an accuracy above 67.6% on all tested images across various scales and resolutions, demonstrating its robustness for these human-free detection tasks. As a preliminary field study, the proposed method is applied to detect damage in a concrete structure tested for its progressive-collapse performance. The experiments indicate that automatic detection of structural damage using deep learning methods is feasible and promising. The training datasets and code will be made publicly available upon publication of this paper.
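The cascaded-network pipeline from the second study can be sketched in a few lines: a classifier first flags images that contain damage, and only flagged images are passed to a segmentation network for pixel-level localization. The two toy networks and the 0.5 threshold below are illustrative assumptions; the paper uses a ResNet-152 classifier and a U-Net segmenter.

```python
# Minimal sketch of the cascaded classification -> segmentation idea.
# Both networks are toy stand-ins for the paper's ResNet-152 and U-Net.
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(8, 1)

    def forward(self, x):
        # probability that the image contains damage
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

class ToySegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 1, 3, padding=1)  # stands in for a full U-Net

    def forward(self, x):
        return torch.sigmoid(self.net(x))

def cascade(image, clf, seg, threshold=0.5):
    """Return a binary damage mask if the classifier flags the image, else None."""
    with torch.no_grad():
        if clf(image).item() < threshold:
            return None            # skip segmentation for undamaged scenes
        return seg(image) > 0.5    # per-pixel damage mask

img = torch.rand(1, 3, 64, 64)
mask = cascade(img, ToyClassifier(), ToySegmenter())
```

Gating the segmenter behind the classifier is what lets the cascade reject whole undamaged images cheaply instead of segmenting everything.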

Publisher / Repository:
SAGE Publications
Journal Name:
Structural Health Monitoring
Page Range / eLocation ID:
p. 338-352
Sponsoring Org:
National Science Foundation
More Like this
  1. Vision-based structural health monitoring (SHM) has become an important approach for recognizing and evaluating structural damage after natural disasters. Deep convolutional neural networks (CNNs) have recently achieved breakthroughs in computer vision, particularly in image classification. In this article, we adopt a deep residual network (ResNet), whose residual representations and shortcut-connection mechanism have delivered strong performance on various computer vision tasks. In addition, we apply transfer learning because of the relatively small number of training images. To test our approach, we use the dataset from the 2018 PEER Hub ImageNet Challenge distributed by the Pacific Earthquake Engineering Research Center. This challenge posed eight structural damage detection tasks: scene classification, damage check, spalling condition, material type, collapse check, component type, damage level, and damage type, which can be categorized as binary and multi-class (3- or 4-class) classification problems. Our experiments on the eight tasks show that reliable classification can be obtained for some of them; accuracy across the eight tasks ranged from 63.1% to 99.4%, and our approach attained third place overall in the challenge. Individual inspection of the training dataset revealed a large number of confusing images, so we believe accuracy will improve once the training data are curated more precisely.
  2. Abstract. In this paper, two different convolutional neural networks (CNNs) are applied to images for automated structural damage detection (SDD) in earthquake-damaged structures and for crack localization (e.g., detection of cracks, their widths, and their distributions) at various scales: pixel level, object level, and structural level. The proposed method has two main steps: 1) diagnosis and 2) localization of cracking or other damage. First, a residual CNN with transfer learning is employed to classify the damage in structures and structural components; this step performs damage detection using two public datasets. The second step uses another CNN with a U-Net structure to locate cracking on low-resolution images. Implementations on public and self-collected datasets show promising performance on a problem that has long remained a challenge in the structural engineering field, and indicate that the proposed approach can detect and localize structural damage with acceptable accuracy.
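The U-Net structure used in the localization step is an encoder-decoder in which downsampled features are upsampled again and concatenated with same-resolution encoder features (skip connections), so fine spatial detail survives into the per-pixel output. A minimal one-level version, with illustrative channel sizes (the paper's U-Net is deeper):

```python
# One-level U-Net-style network: encoder, bottleneck, decoder with a skip connection.
import torch
import torch.nn as nn

class MiniUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)
        self.bottleneck = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        # the decoder sees upsampled features concatenated with the skip connection
        self.dec = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 1))

    def forward(self, x):
        skip = self.enc(x)                      # full-resolution features
        mid = self.bottleneck(self.down(skip))  # half-resolution features
        up = self.up(mid)                       # upsample back to full resolution
        return self.dec(torch.cat([up, skip], dim=1))  # per-pixel crack logit

out = MiniUNet()(torch.rand(1, 3, 64, 64))     # logit map, same spatial size as input
```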
  3. In this paper, we develop and implement end-to-end deep learning approaches to automatically detect two important types of structural failure, cracks and spalling, in buildings and bridges during extreme events such as major earthquakes. A total of 2,229 images were annotated and used to train and validate three newly developed Mask Regional Convolutional Neural Networks (Mask R-CNNs). In addition, three sets of public images from different disasters were used to test the accuracy of these models. For detecting and marking these two types of structural failure, one of the proposed methods achieves accuracies of 67.6% and 81.1% on low- and high-resolution images, respectively, collected from field investigations. The results demonstrate that the proposed end-to-end method is feasible for automatically locating and segmenting damage in 2D images, which can assist human experts during disasters.
  4. As one of the most popular deep learning methods, deep convolutional neural networks (DCNNs) have been widely adopted for segmentation tasks with positive results. However, DCNN-based segmentation frameworks are known to handle global relations among imaging features poorly. Although several techniques have been proposed to enhance the global reasoning of DCNNs, these models either fail to match the performance of traditional fully convolutional structures or cannot exploit the basic advantage of CNN-based networks, namely local reasoning. In this study, in contrast to current attempts to combine FCNs with global reasoning methods, we fully exploit self-attention by designing a novel attention mechanism for 3D computation and propose a new segmentation framework, named 3DTU, for three-dimensional medical image segmentation. The framework processes images end to end and performs 3D computation on both the encoder side (which contains a 3D transformer) and the decoder side (which is based on a 3D DCNN). We tested the framework on two independent datasets of 3D MRI and CT images. Experimental results clearly demonstrate that our method outperforms several state-of-the-art segmentation methods on various metrics.
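The global reasoning that convolutions lack comes from scaled dot-product self-attention: every volumetric patch token attends to every other, so features mix across the whole volume in one step. The sketch below shows that core operation on a sequence of 3D-patch tokens; the dimensions and random projection matrices are illustrative, not the 3DTU architecture itself.

```python
# Scaled dot-product self-attention over a sequence of 3D-patch tokens.
import math
import torch

def self_attention(tokens, w_q, w_k, w_v):
    """tokens: (n_patches, d); w_q/w_k/w_v: (d, d) projection matrices."""
    q, k, v = tokens @ w_q, tokens @ w_k, tokens @ w_v
    scores = q @ k.T / math.sqrt(q.shape[-1])   # pairwise patch affinities
    weights = torch.softmax(scores, dim=-1)     # each patch attends to all patches
    return weights @ v                          # globally mixed features

d = 8
tokens = torch.rand(27, d)  # e.g., a 3x3x3 grid of volumetric patches, flattened
out = self_attention(tokens, torch.rand(d, d), torch.rand(d, d), torch.rand(d, d))
```

A convolution with a 3x3x3 kernel would need many stacked layers for two distant patches to influence each other; here every pair interacts directly through the affinity matrix.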
  5. Flooding is one of the leading natural-disaster threats to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal terms. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetry platform for quickly delivering high-resolution imagery because of its cost-effectiveness, ability to fly at low altitudes, and ability to enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood-extent mapping. In recent years, remote sensing image classification has improved significantly with Convolutional Neural Networks (CNNs), which have demonstrated excellent performance on tasks including image classification, feature extraction, and segmentation; CNNs learn features automatically from large datasets through multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches for extracting flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used; the model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets containing only one hundred training samples and still yield highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method, and the segmentation results from FCN-16s were compared with those from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs extract flooded areas from UAV images more precisely than traditional classifiers such as SVMs: the classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
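Per-class accuracy figures like those for the water class can be read off a confusion matrix as the diagonal entry divided by its row sum (i.e., recall for that class). The 2x2 matrix below is made-up illustrative data, not the study's results.

```python
# Per-class accuracy from a confusion matrix (diagonal / row sums).
import numpy as np

cm = np.array([[950,  50],    # rows: true class (water, non-water)
               [ 30, 970]])   # cols: predicted class
per_class_acc = cm.diagonal() / cm.sum(axis=1)
print(per_class_acc)  # [0.95 0.97] -> water: 0.95, non-water: 0.97
```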