Title: Similarity learning to enable building searches in post‐event image data
Reconnaissance teams collect perishable data after each disaster to learn about building performance. However, these large image sets are often not adequately curated, nor do they carry sufficient metadata (e.g., GPS coordinates), hindering any chance of identifying images of the same building when they are collected by different reconnaissance teams. In this study, Siamese convolutional neural networks (S‐CNN) are implemented and repurposed to establish a building search capability suitable for post‐disaster imagery. The method automatically ranks and retrieves corresponding building images in response to a single image query. In the demonstration, we utilize real‐world images collected from 174 reinforced‐concrete buildings affected by the 2016 Southern Taiwan and the 2017 Pohang (South Korea) earthquake events. A quantitative performance evaluation is conducted by examining two metrics introduced for this application: Similarity Score (SS) and Similarity Rank (SR).
Award ID(s):
1835473
NSF-PAR ID:
10308793
Author(s) / Creator(s):
Date Published:
Journal Name:
Computer-Aided Civil and Infrastructure Engineering
ISSN:
1093-9687
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
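
The retrieval step described in the abstract above can be illustrated with a minimal Siamese-style sketch (not the authors' implementation): a shared CNN backbone embeds the query image and the candidate building images, and candidates are ranked by their similarity to the query. The backbone, embedding size, and use of cosine similarity are assumptions here; the published Similarity Score (SS) and Similarity Rank (SR) metrics may be defined differently.

```python
# Minimal Siamese-style retrieval sketch (PyTorch); hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

class SiameseEncoder(nn.Module):
    """One shared CNN tower; every image passes through the same weights."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        backbone = models.resnet18(weights=None)   # backbone choice is an assumption
        backbone.fc = nn.Identity()                # drop the ImageNet classifier head
        self.backbone = backbone
        self.head = nn.Linear(512, embedding_dim)  # project to a compact embedding

    def forward(self, x):
        return F.normalize(self.head(self.backbone(x)), dim=1)

def rank_candidates(encoder, query, candidates):
    """Return candidate indices sorted by similarity to the query (most similar first)."""
    encoder.eval()
    with torch.no_grad():
        q = encoder(query.unsqueeze(0))            # (1, D)
        c = encoder(candidates)                    # (N, D)
        scores = (c @ q.T).squeeze(1)              # cosine similarity (unit-norm embeddings)
    order = torch.argsort(scores, descending=True)
    return order.tolist(), scores[order].tolist()

if __name__ == "__main__":
    enc = SiameseEncoder()
    query_img = torch.rand(3, 224, 224)            # placeholders; real use needs preprocessing
    candidate_imgs = torch.rand(10, 3, 224, 224)
    ranking, scores = rank_candidates(enc, query_img, candidate_imgs)
    print(ranking[:3], scores[:3])
```

In practice the encoder would be trained on matched/unmatched building-image pairs so that images of the same building map to nearby embeddings.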
More Like this
  1. Abstract

    After a disaster, teams of structural engineers collect vast amounts of images from damaged buildings to obtain new knowledge and extract lessons from the event. However, in many cases, the images collected are captured without sufficient spatial context. When damage is severe, it may be quite difficult to even recognize the building. Accessing images of the predisaster condition of those buildings is required to accurately identify the cause of the failure or the actual loss in the building. Here, to address this issue, we develop a method to automatically extract pre‐event building images from 360° panorama images (panoramas). Given a geotagged image collected near the target building as the input, panoramas close to the input image location are automatically downloaded through street view services (e.g., Google or Bing in the United States). By computing the geometric relationship between the panoramas and the target building, the most suitable projection direction for each panorama is identified to generate high‐quality 2D images of the building. Region‐based convolutional neural networks are exploited to recognize the building within those 2D images. Several panoramas are used so that the detected building images provide various viewpoints of the building. To demonstrate the capability of the technique, we consider residential buildings in Holiday Beach in Rockport, Texas, United States, that experienced significant devastation in Hurricane Harvey in 2017. Using geotagged images gathered during actual postdisaster building reconnaissance missions, we verify the method by successfully extracting residential building images from Google Street View images, which were captured before the event.

     
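The geometric step described above, selecting a projection direction for each panorama, essentially requires the bearing from the panorama's geotag to the building's geotag. The sketch below is an assumed, minimal version of that computation using the standard forward-azimuth formula; the paper's actual projection and download logic is not reproduced here, and the coordinates are hypothetical.

```python
# Sketch: compute the heading from a panorama location to a target building,
# which can then be used as the projection direction for a perspective view.
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Forward azimuth from point 1 (panorama) to point 2 (building), in degrees from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

# Illustrative coordinates near Rockport, TX (not from the study).
pano = (28.135, -96.955)       # panorama geotag
building = (28.136, -96.954)   # target building geotag
heading = bearing_deg(*pano, *building)
print(f"Project the panorama toward heading {heading:.1f} degrees to face the building")
```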
  2.
    In the aftermath of earthquake events, reconnaissance teams are deployed to gather vast amounts of images, moving quickly to capture perishable data that document the performance of infrastructure before such evidence is destroyed. Learning from these data enables engineers to gain new knowledge about the real-world performance of structures. This new knowledge, extracted from visual data, is critical to mitigating the risks (e.g., damage and loss of life) associated with our built environment in future events. Currently, this learning process is entirely manual, requiring considerable time and expense. Thus, unfortunately, only a tiny portion of these images is shared, curated, and actually utilized. The power of computers and artificial intelligence enables a new approach to organize and catalog such visual data with minimal manual effort. Here we discuss the development and deployment of an organizational system to automate the analysis of large volumes of post-disaster visual data (images). Our application, named the Automated Reconnaissance Image Organizer (ARIO), allows a field engineer to rapidly and automatically categorize their reconnaissance images. ARIO exploits deep convolutional neural networks and trained classifiers, and yields a structured report combined with useful metadata. Classifiers are trained using our ground-truth visual database, which includes over 140,000 images from past earthquake reconnaissance missions to study post-disaster buildings in the field. We also discuss the novel deployment of the ARIO application within a cloud-based system named VISER (Visual Structural Expertise Replicator), a comprehensive cloud-based visual data analytics system with a novel Netflix-inspired technical search capability. Field engineers can exploit this research and our application to search an image repository for visual content. We anticipate that these tools will empower engineers to learn new lessons from earthquakes more rapidly using reconnaissance data.
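As a hedged illustration only (ARIO's actual taxonomy, classifiers, and report schema are not given in this abstract), the sketch below shows the general pattern: a CNN assigns a category to each reconnaissance image and the result is written into a structured report alongside basic metadata. The category names, report fields, and backbone are hypothetical, and the classifier head shown here would still need to be trained on a labeled reconnaissance-image database.

```python
# Illustrative image-categorization loop in the spirit of ARIO; labels and report
# fields are hypothetical placeholders, not the deployed system's taxonomy.
import json
from pathlib import Path

import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

CATEGORIES = ["overview", "column_damage", "wall_damage", "drawing", "irrelevant"]  # hypothetical

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Pretrained backbone with a small classifier head; the head must be trained on
# a ground-truth reconnaissance database before the predictions are meaningful.
model = models.resnet50(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, len(CATEGORIES))
model.eval()

def categorize(image_dir: str) -> list[dict]:
    """Build a structured report entry for every JPEG in a mission folder."""
    report = []
    for path in sorted(Path(image_dir).glob("*.jpg")):
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        with torch.no_grad():
            probs = torch.softmax(model(x), dim=1).squeeze(0)
        idx = int(probs.argmax())
        report.append({
            "file": path.name,
            "category": CATEGORIES[idx],
            "confidence": round(float(probs[idx]), 3),
        })
    return report

if __name__ == "__main__":
    print(json.dumps(categorize("./mission_photos"), indent=2))  # hypothetical folder
```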
  3. Image data collected after natural disasters play an important role in the forensics of structural failures. However, curating and managing large amounts of post-disaster imagery is challenging. In most cases, data users still have to spend considerable effort to find and sort images from the massive archives accumulated over past decades in order to study specific types of disasters. This paper proposes a new machine-learning-based approach for automating the labeling and classification of large volumes of post-natural-disaster image data to address this issue. More specifically, the proposed method couples pre-trained computer vision models and a natural language processing model with an ontology tailored to natural disasters to facilitate the search and query of specific types of image data. The resulting process returns each image with five primary labels and similarity scores, representing its content based on the developed word-embedding model. Validation and accuracy assessment of the proposed methodology were conducted with ground-level residential building panoramic images from Hurricane Harvey. The computed primary labels showed a minimum average difference of 13.32% when compared with manually assigned labels. This versatile and adaptable approach offers a practical solution for automating image labeling and classification tasks, with the potential to be applied to various image classification problems in different fields and industries. The flexibility of the method means that it can be updated and improved to meet the evolving needs of various domains, making it a valuable asset for future research and development.
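A minimal sketch of the label-to-ontology matching idea follows: ontology terms are scored against the labels returned by a vision model using cosine similarity of word embeddings, and the five best-matching terms are kept as primary labels. The vectors below are random placeholders, and the vocabulary and ontology terms are assumptions; the paper's trained word-embedding model and disaster ontology are not reproduced here.

```python
# Sketch: rank ontology terms against vision-model labels via embedding similarity.
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["roof", "debris", "flood", "window", "wall", "tree", "vehicle", "water", "damage", "siding"]
EMBEDDINGS = {w: rng.normal(size=50) for w in VOCAB}   # placeholder 50-d vectors

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def primary_labels(detected_labels, ontology_terms, k=5):
    """Score each ontology term by its best similarity to any detected label; keep the top k."""
    scored = []
    for term in ontology_terms:
        sims = [cosine(EMBEDDINGS[term], EMBEDDINGS[lbl])
                for lbl in detected_labels if lbl in EMBEDDINGS]
        if sims:
            scored.append((term, max(sims)))
    return sorted(scored, key=lambda t: t[1], reverse=True)[:k]

detected = ["roof", "debris", "tree", "water"]          # e.g., labels from a vision model
ontology = ["flood", "damage", "wall", "window", "siding", "vehicle"]
print(primary_labels(detected, ontology))               # five (term, similarity) pairs
```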
  4. Abstract

    Disasters provide an invaluable opportunity to evaluate contemporary design standards and construction practices; these evaluations have historically relied upon experts, which inherently limited the speed, scope, and coverage of post-disaster reconnaissance. However, hybrid assessments that localize data collection and engage remote expertise offer a promising alternative, particularly in challenging contexts. This paper describes a multi-phase hybrid assessment conducted after the 2021 M7.2 earthquake in Haiti, where security issues limited international participation: rapid assessments with wide coverage were followed by detailed assessments of specific building subclasses. The rapid assessment classified and assigned global damage ratings to over 12,500 buildings, using over 40 non-expert local data collectors to feed imagery to dozens of remote engineers. A detailed assessment protocol was then used to conduct component-level evaluations of over 200 homes employing enhanced vernacular construction, identified via machine learning from nearly 40,000 acquired images. A second mobile application guided local data collectors through systematic forensic documentation of 30 of these homes, providing remote engineers with essential implementation details. Overall, this hybrid assessment underscored that performance in the 2021 earthquake fundamentally depended upon the type and consistency of the bracing scheme. The developed assessment tools and mobile apps have been shared as a demonstration of how a hybrid approach can be used for rapid and detailed assessments following major earthquakes in challenging contexts. More importantly, the open datasets generated continue to inform efforts to promote greater use of enhanced vernacular architecture as a multi-hazard-resilient typology that can deliver life safety in low-income countries.

     
  5. Building an annotated damage image database is the first step toward supporting AI-assisted hurricane impact analysis. To date, annotated datasets for model training remain insufficient at the local level, despite the abundant raw data collected over decades. This paper provides a systematic approach for establishing an annotated database of hurricane-damaged building images to support AI-assisted damage assessment and analysis. Optimal rectilinear images were generated from panoramic images collected after Hurricane Harvey (Texas, 2017). Deep learning models, including Amazon Web Services (AWS) Rekognition and Mask R-CNN (Region-Based Convolutional Neural Network), were then retrained on these data to develop a pipeline for building detection and structural component extraction. A web-based dashboard was developed for building data management and for visualizing processed images along with detected structural components and their damage ratings. The proposed AI-assisted labeling tool and trained models can intelligently and rapidly assist potential users such as hazard researchers, practitioners, and government agencies with natural disaster damage management.
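The detection stage of such a pipeline can be prototyped with an off-the-shelf Mask R-CNN before any retraining; the sketch below runs the pretrained torchvision model on a rectilinear image and keeps high-confidence detections. Note that the COCO-pretrained label set does not include buildings or structural components, so the retraining on annotated hurricane imagery described in the abstract (and the AWS Rekognition component) is not reproduced here; the file name is hypothetical.

```python
# Sketch: run a pretrained Mask R-CNN on a rectilinear image and keep confident
# detections. Retraining on building/component classes (as in the paper) is needed
# for real use, since COCO labels do not cover structural components.
import torch
from torchvision import transforms
from torchvision.models.detection import maskrcnn_resnet50_fpn
from PIL import Image

model = maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect(image_path: str, score_threshold: float = 0.7):
    img = transforms.ToTensor()(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        out = model([img])[0]            # dict with boxes, labels, scores, masks
    keep = out["scores"] >= score_threshold
    return {
        "boxes": out["boxes"][keep],     # (N, 4) pixel coordinates
        "labels": out["labels"][keep],   # COCO class indices (custom classes after retraining)
        "masks": out["masks"][keep],     # (N, 1, H, W) soft instance masks
    }

if __name__ == "__main__":
    results = detect("rectilinear_view.jpg")   # hypothetical file name
    print(f"{len(results['boxes'])} detections above threshold")
```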