Title: Similarity learning to enable building searches in post‐event image data
Reconnaissance teams collect perishable data after each disaster to learn about building performance. However, these large image sets are often not adequately curated, nor do they carry sufficient metadata (e.g., GPS coordinates), hindering any attempt to identify images of the same building when collected by different reconnaissance teams. In this study, Siamese convolutional neural networks (S‐CNNs) are implemented and repurposed to establish a building search capability suitable for post‐disaster imagery. The method automatically ranks and retrieves corresponding building images in response to a single image query. In the demonstration, we utilize real‐world images collected from 174 reinforced‐concrete buildings affected by the 2016 Southern Taiwan and 2017 Pohang (South Korea) earthquakes. A quantitative performance evaluation is conducted using two metrics introduced for this application: Similarity Score (SS) and Similarity Rank (SR).
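The abstract does not specify how the two metrics are computed; as a rough sketch (assuming the Similarity Score is a cosine similarity between learned S‐CNN embeddings, and the Similarity Rank is the true match's position in the sorted gallery — both assumptions, not the paper's definitions), the retrieval step might look like:

```python
import numpy as np

def similarity_scores(query_emb, gallery_embs):
    """Cosine similarity between one query embedding and each gallery embedding.

    Embeddings are assumed to come from the twin branches of a trained
    Siamese CNN; producing them is not shown here.
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    return g @ q  # one score per gallery image, in [-1, 1]

def similarity_rank(scores, true_index):
    """1-based position of the correct building in the ranked results."""
    order = np.argsort(-scores)  # best match first
    return int(np.where(order == true_index)[0][0]) + 1

# Toy example: a 4-image gallery of 8-dim embeddings.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(4, 8))
query = gallery[2] + 0.05 * rng.normal(size=8)  # near-duplicate of image 2
scores = similarity_scores(query, gallery)
print(similarity_rank(scores, true_index=2))  # expect 1: image 2 ranks first
```

In practice the gallery embeddings would be precomputed once per reconnaissance mission, so each query reduces to one matrix–vector product and a sort.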
Award ID(s):
1835473
PAR ID:
10308793
Author(s) / Creator(s):
Date Published:
Journal Name:
Computer-Aided Civil and Infrastructure Engineering
ISSN:
1093-9687
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    In the aftermath of earthquake events, reconnaissance teams are deployed to gather vast numbers of images, moving quickly to capture perishable data documenting the performance of infrastructure before it is lost. Learning from such data enables engineers to gain new knowledge about the real-world performance of structures. This new knowledge, extracted from visual data, is critical to mitigating the risks (e.g., damage and loss of life) associated with our built environment in future events. Currently, this learning process is entirely manual, requiring considerable time and expense; thus, unfortunately, only a tiny portion of these images are shared, curated, and actually utilized. The power of computers and artificial intelligence enables a new approach to organizing and cataloging such visual data with minimal manual effort. Here we discuss the development and deployment of an organizational system to automate the analysis of large volumes of post-disaster visual data (images). Our application, named the Automated Reconnaissance Image Organizer (ARIO), allows a field engineer to rapidly and automatically categorize their reconnaissance images. ARIO exploits deep convolutional neural networks and trained classifiers, and yields a structured report combined with useful metadata. Classifiers are trained using our ground-truth visual database, which includes over 140,000 images from past earthquake reconnaissance missions studying post-disaster buildings in the field. We also discuss the novel deployment of the ARIO application within a cloud-based system we named VISER (Visual Structural Expertise Replicator), a comprehensive cloud-based visual data analytics system with a Netflix-inspired technical search capability. Field engineers can exploit this research and our application to search an image repository for visual content. We anticipate that these tools will empower engineers to more rapidly learn new lessons from earthquakes using reconnaissance data.
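The classifier pipeline itself is not reproduced here, but the organizing step described — turning per-image classifier confidences into a structured, categorized report — can be sketched minimally as follows. The category names and score format are illustrative assumptions, not ARIO's actual schema:

```python
from collections import defaultdict

# Illustrative categories only; ARIO's real taxonomy is trained on its
# >140,000-image ground-truth database and is not documented here.
CATEGORIES = ["overview", "column_damage", "wall_cracking", "drawing"]

def organize(image_scores):
    """Group images by their highest-scoring category.

    image_scores maps an image filename to {category: confidence},
    e.g. as produced by a deep CNN classifier.
    """
    report = defaultdict(list)
    for filename, scores in image_scores.items():
        best = max(scores, key=scores.get)
        report[best].append((filename, scores[best]))
    # Most confident images first within each category.
    return {cat: sorted(imgs, key=lambda t: -t[1]) for cat, imgs in report.items()}

demo = {
    "IMG_001.jpg": {"overview": 0.91, "column_damage": 0.05},
    "IMG_002.jpg": {"column_damage": 0.88, "wall_cracking": 0.10},
    "IMG_003.jpg": {"column_damage": 0.95, "overview": 0.03},
}
report = organize(demo)
print(report["column_damage"])  # IMG_003 (0.95) before IMG_002 (0.88)
```

A structured report like this, joined with image metadata (timestamp, mission, GPS where available), is the kind of output a field engineer could generate automatically at the end of a collection day.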
  2. Image data collected after natural disasters play an important role in the forensics of structural failures. However, curating and managing large amounts of post-disaster imagery is challenging: in most cases, data users must spend considerable effort finding and sorting images from the massive archives accumulated over past decades in order to study specific types of disasters. This paper proposes a new machine-learning-based approach for automating the labeling and classification of large volumes of post-natural-disaster image data. More specifically, the proposed method couples pre-trained computer vision models and a natural language processing model with an ontology tailored to natural disasters to facilitate the search and query of specific types of image data. The resulting process returns each image with five primary labels and similarity scores representing its content, based on the developed word-embedding model. Validation and accuracy assessment of the proposed methodology were conducted with ground-level panoramic images of residential buildings from Hurricane Harvey. The computed primary labels showed a minimum average difference of 13.32% when compared to manually assigned labels. This versatile and adaptable approach offers a practical solution for automating image labeling and classification, with the potential to be applied to various image classification tasks across different fields and industries. Because the method is flexible, it can be updated and improved to meet the evolving needs of various domains, making it a valuable asset for future research and development.
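The label-assignment step — scoring ontology terms against an image's content via word embeddings — can be sketched roughly as below. The term list, vectors, and pooling scheme are all illustrative assumptions; the paper's actual ontology and embedding model are not reproduced here:

```python
import numpy as np

def top_labels(image_vec, term_vecs, terms, k=5):
    """Rank ontology terms by cosine similarity to an image-content vector.

    image_vec is assumed to summarize the image (e.g. pooled word embeddings
    of objects detected by a pre-trained vision model); term_vecs holds one
    embedding per ontology term.
    """
    v = image_vec / np.linalg.norm(image_vec)
    t = term_vecs / np.linalg.norm(term_vecs, axis=1, keepdims=True)
    sims = t @ v
    order = np.argsort(-sims)[:k]
    return [(terms[i], float(sims[i])) for i in order]

# Toy ontology terms with random stand-in embeddings.
terms = ["roof damage", "flooding", "debris", "intact wall", "broken window"]
rng = np.random.default_rng(1)
term_vecs = rng.normal(size=(5, 16))
image_vec = term_vecs[1] + 0.1 * rng.normal(size=16)  # close to "flooding"
labels = top_labels(image_vec, term_vecs, terms, k=3)
print(labels[0][0])  # "flooding" should rank first
```

With k=5 this returns the five primary labels and their similarity scores, matching the output format the abstract describes.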
  3. Building an annotated damage image database is the first step toward supporting AI-assisted hurricane impact analysis. To date, annotated datasets for model training remain insufficient at the local level, despite the abundant raw data collected over decades. This paper provides a systematic approach for establishing an annotated database of hurricane-damaged building images to support AI-assisted damage assessment and analysis. Optimal rectilinear images were generated from panoramic images collected after Hurricane Harvey (Texas, 2017). Then, deep learning models, including Amazon Web Services (AWS) Rekognition and Mask R-CNN (Region-Based Convolutional Neural Networks), were retrained on the data to develop a pipeline for building detection and structural component extraction. A web-based dashboard was developed for building data management and for visualizing processed images along with detected structural components and their damage ratings. The proposed AI-assisted labeling tool and trained models can intelligently and rapidly assist potential users such as hazard researchers, practitioners, and government agencies with natural disaster damage management.
  4. After a disaster strikes an urban area, damage to a building's façades may produce dangerous falling hazards that jeopardize pedestrians and vehicles. Thus, building façades must be rapidly inspected to prevent potential loss of life and property damage. Harnessing new vision sensors and associated sensing platforms, such as unmanned aerial vehicles (UAVs), would expedite this process and alleviate the spatial and temporal limitations typically associated with human-based inspection of high-rise buildings. In this paper, we develop an approach to perform rapid and accurate visual inspection of building façades using images collected from UAVs. An orthophoto corresponding to any reasonably flat region on the building (e.g., a façade or building side) is automatically constructed using a structure-from-motion (SfM) technique, followed by image stitching and blending. Based on the geometric relationship between the collected images and the constructed orthophoto, high-resolution regions-of-interest are automatically extracted from the collected images, enabling efficient visual inspection. We successfully demonstrate the capabilities of the technique on an abandoned building whose façade has damaged components (e.g., window panes and external drainage pipes).
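For a flat façade, the geometric relationship between the orthophoto and each collected image can be represented by a planar homography; a minimal sketch of mapping a region-of-interest's corners from orthophoto coordinates back into a source image (the specific matrix and coordinates below are illustrative, not from the paper):

```python
import numpy as np

def apply_homography(H, pts):
    """Map 2D points through a 3x3 homography using homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]  # divide out the projective scale

# Illustrative homography: scale by 2 and translate by (10, 5), standing in
# for the transform SfM recovers between the orthophoto and one source image.
H = np.array([[2.0, 0.0, 10.0],
              [0.0, 2.0,  5.0],
              [0.0, 0.0,  1.0]])

# Corners of a region-of-interest on the orthophoto (e.g. a damaged window).
roi = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 50.0], [0.0, 50.0]])
print(apply_homography(H, roi))  # pixel coordinates to crop at full resolution
```

Cropping the source image at the mapped corners recovers the region at full sensor resolution, rather than at the (blended, possibly lower-resolution) orthophoto scale.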
  5.
    Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, teams of engineers make significant efforts to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. The inability to document the images’ locations hinders the analysis, organization, and documentation of these images, as they lack sufficient spatial context. In this work, we develop a methodology to localize images and link them to locations on a structural drawing. A stream of images can readily be gathered along the path taken through a building using a compact camera. These images can be used to compute the relative location of each image within a 3D point cloud model, reconstructed using a visual odometry algorithm. The images can also be used to create local 3D textured models of building components of interest using a structure-from-motion algorithm. A parallel set of images collected for building assessment is linked to the image stream using time information. By projecting the point cloud model onto the structural drawing, the images can be overlaid on the drawing, providing the clear context information necessary to make use of them. Additionally, components or damage of interest captured in these images can be reconstructed in 3D, enabling detailed assessments with sufficient geospatial context. The technique is demonstrated by emulating post-event building assessment and data collection in a real building.
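The final overlay step — aligning the visual-odometry camera path with a 2D structural drawing — amounts, in the simplest case, to dropping the vertical axis and applying a 2D similarity transform (scale, rotation, translation). The axis convention and alignment parameters below are illustrative assumptions, not the paper's calibration:

```python
import numpy as np

def to_drawing(points_3d, scale, theta, t):
    """Project 3D points to plan view, then apply a 2D similarity transform
    (scale, rotation, translation) into the drawing's coordinate frame."""
    xy = points_3d[:, [0, 2]]  # assume y is up; keep plan-view coordinates
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return scale * xy @ R.T + t

# Toy camera path from visual odometry: (x, y=height, z), in meters.
path = np.array([[0.0, 1.5, 0.0], [1.0, 1.5, 0.0], [1.0, 1.5, 2.0]])
# Illustrative alignment: 50 drawing units per meter, no rotation,
# origin offset to the drawing's corner.
on_drawing = to_drawing(path, scale=50.0, theta=0.0, t=np.array([200.0, 100.0]))
print(on_drawing)  # [[200. 100.] [250. 100.] [250. 200.]]
```

Each image in the stream then inherits the drawing coordinates of its camera pose, which is what lets the assessment photos be pinned to locations on the structural drawing.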