

Search for: All records

Award ID contains: 1835473

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Free, publicly-accessible full text available July 1, 2024
  2. The Federal Highway Administration (FHWA) mandates biennial bridge inspections to assess the condition of all bridges in the United States. These inspections are recorded in the National Bridge Inventory (NBI) and in each state's own databases so the data can be managed, studied, and analyzed. As FHWA specifications become more complex, inspections require more training and field time. Recently, element-level inspections were added, assigning a condition state to each minor element in the bridge. To address this new requirement, a machine-aided bridge inspection method was developed that uses artificial intelligence (AI) to assist inspectors. The proposed method focuses on the condition state assessment of cracking in reinforced concrete bridge deck elements. A deep learning-based workflow integrating image classification and semantic segmentation is used to extract information from images and evaluate the condition state of cracks according to FHWA specifications. The workflow uses a deep neural network to extract the information required by the bridge inspection manual, enabling determination of the condition state of cracks in the deck. Experimental results demonstrate the effectiveness of this workflow for this application. The method also balances the costs and risks associated with increasing levels of AI involvement, enabling inspectors to better manage their resources. This AI-based method can be implemented by asset owners, such as Departments of Transportation, to better serve communities.
    Free, publicly-accessible full text available May 1, 2024
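The final step of the crack-assessment workflow described above, mapping a measured crack width to an element-level condition state, can be sketched in a few lines. The width thresholds and the per-row width input below are illustrative assumptions for this sketch, not the values or procedure in the FHWA/AASHTO manuals.

```python
# Sketch of the last step of a crack condition-state workflow: a
# semantic-segmentation model yields a crack mask, a maximum crack
# width (mm) is estimated from it, and that width is mapped to a
# condition state. Thresholds here are illustrative placeholders,
# not the values in the bridge inspection manual.

def max_crack_width_mm(mask_row_widths_px, mm_per_px):
    """Estimate the widest crack from per-row crack widths (pixels)."""
    if not mask_row_widths_px:
        return 0.0
    return max(mask_row_widths_px) * mm_per_px

def condition_state(width_mm):
    """Map an estimated crack width to a condition state (CS1-CS4)."""
    if width_mm == 0.0:
        return "CS1"   # no defect observed
    if width_mm < 0.3:
        return "CS2"   # hairline cracking (assumed bound)
    if width_mm < 1.0:
        return "CS3"   # moderate-width cracking (assumed bound)
    return "CS4"       # wide cracking; further review warranted
```

For example, a mask whose widest row spans 5 pixels at 0.1 mm/pixel gives a 0.5 mm crack, which this sketch would classify as CS3.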
  3. The purpose of a routine bridge inspection is to assess the physical and functional condition of a bridge at a regularly scheduled interval. The Federal Highway Administration (FHWA) requires these inspections to be conducted at least every 2 years. Inspectors use simple tools and visual inspection techniques to determine the condition of both the individual elements of the bridge structure and the bridge overall. While in the field, data are collected in the form of images and notes; after the field work is complete, inspectors must generate a report based on these data to document their findings. The report generation process includes several tasks: (1) evaluating the condition rating of each bridge element according to the FHWA Recording and Coding Guide for the Structure Inventory and Appraisal of the Nation's Bridges; and (2) updating and organizing the bridge inspection images for the report. Both tasks are time-consuming. This study focuses on assisting with the latter task by developing an artificial intelligence (AI)-based method to rapidly organize bridge inspection images and generate a report. In this paper, an image organization schema based on the FHWA Recording and Coding Guide and the Manual for Bridge Element Inspection is described, and several convolutional neural network-based classifiers are trained with real inspection images collected in the field. Additionally, exchangeable image file format (EXIF) information is automatically extracted to organize inspection images according to their time stamps. Finally, the Automated Bridge Image Reporting Tool (ABIRT) is described: a browser-based system built on the trained classifiers. Inspectors can upload images directly to this tool and rapidly obtain organized images and an associated inspection report from any computer with an internet connection. The authors provide recommendations to inspectors for gathering future images to make the best use of this tool.
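The timestamp-based organization step described above can be sketched with the standard library alone. EXIF stores capture time as a "YYYY:MM:DD HH:MM:SS" string; the filenames and report structure below are assumptions for illustration, and real EXIF extraction would use an image library rather than a prepared dictionary.

```python
# Sketch of ordering inspection images chronologically by their EXIF
# DateTimeOriginal value (format "YYYY:MM:DD HH:MM:SS"). Timestamps
# are assumed to have been read out of the files already.

from datetime import datetime

def order_by_capture_time(images):
    """images: dict of filename -> EXIF DateTimeOriginal string.
    Returns filenames sorted by capture time for the report."""
    def parse(ts):
        return datetime.strptime(ts, "%Y:%m:%d %H:%M:%S")
    return sorted(images, key=lambda name: parse(images[name]))

shots = {
    "deck_02.jpg":  "2023:05:14 10:41:03",
    "girder_1.jpg": "2023:05:14 09:12:55",
    "joint_a.jpg":  "2023:05:14 11:02:17",
}
print(order_by_capture_time(shots))
# → ['girder_1.jpg', 'deck_02.jpg', 'joint_a.jpg']
```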
  4. Collecting massive amounts of image data is a common way to record the post-event condition of buildings so that engineers and researchers can learn from that event. Key information needed to interpret the image data collected during these reconnaissance missions is the location within the building where each image was taken. However, image localization is difficult in an indoor environment because GPS is generally unavailable due to weak or blocked signals. To support rapid, seamless data collection during a reconnaissance mission, we develop and validate a fully automated technique that provides robust indoor localization while requiring no prior information about the condition or spatial layout of the indoor environment. The technique is meant for large-scale data collection across multiple floors within multiple buildings. A systematic method is designed to separate the reconnaissance data into individual buildings and individual floors. Then, for the data within each floor, an optimization problem is formulated to automatically overlay the path onto the structural drawings, providing robust results and, subsequently, yielding the image locations. The end-to-end technique only requires the data collector to wear an additional inexpensive motion camera; thus, it adds no time or effort to the current rapid reconnaissance protocol. Because no prior information about the condition or spatial layout of the indoor environment is needed, this technique can be adapted to a large variety of building environments and does not require any preparation in post-event settings. The technique is validated using data collected from several real buildings.
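The core of the path-overlay step above, aligning a reconstructed walking path with a floor drawing, can be sketched as a least-squares fit of a 2D similarity transform (scale, rotation, translation). Representing points as complex numbers gives a closed form. This is a hypothetical stand-in for the paper's full optimization, which also handles separating data by building and floor.

```python
# Sketch of path-to-drawing alignment as a least-squares 2D similarity
# transform. Points are complex numbers x + 1j*y; p are trajectory
# points, q are the matched locations on the floor drawing. We solve
# for a, b minimizing sum |a*p_i + b - q_i|^2; a encodes rotation and
# scale, b the translation.

def fit_similarity(p, q):
    """Return (a, b) so that a*p_i + b best matches q_i (least squares)."""
    n = len(p)
    mp = sum(p) / n
    mq = sum(q) / n
    num = sum((qi - mq) * (pi - mp).conjugate() for pi, qi in zip(p, q))
    den = sum(abs(pi - mp) ** 2 for pi in p)
    a = num / den          # rotation angle and scale
    b = mq - a * mp        # translation
    return a, b

# Example: the drawing coordinates are the path rotated 90 degrees
# (multiplication by 1j) and shifted by (5, 2).
path = [0 + 0j, 1 + 0j, 1 + 1j]
plan = [(1j * z) + (5 + 2j) for z in path]
a, b = fit_similarity(path, plan)
```

Because the example correspondence is an exact similarity, the fit recovers a = 1j and b = 5 + 2j; with noisy matched points the same formula returns the best-fit transform.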
  5. In the aftermath of earthquake events, reconnaissance teams are deployed to gather vast amounts of images, moving quickly to capture perishable data that document the performance of infrastructure before it is destroyed. Learning from such data enables engineers to gain new knowledge about the real-world performance of structures. This new knowledge, extracted from visual data, is critical to mitigating the risks (e.g., damage and loss of life) associated with our built environment in future events. Currently, this learning process is entirely manual, requiring considerable time and expense; thus, unfortunately, only a tiny portion of these images are shared, curated, and actually utilized. The power of computers and artificial intelligence enables a new approach to organize and catalog such visual data with minimal manual effort. Here we discuss the development and deployment of an organizational system to automate the analysis of large volumes of post-disaster visual data (images). Our application, named the Automated Reconnaissance Image Organizer (ARIO), allows a field engineer to rapidly and automatically categorize their reconnaissance images. ARIO exploits deep convolutional neural networks and trained classifiers, and yields a structured report combined with useful metadata. The classifiers are trained using our ground-truth visual database, which includes over 140,000 images from past earthquake reconnaissance missions to study post-disaster buildings in the field. We also discuss the deployment of the ARIO application within a cloud-based system named VISER (Visual Structural Expertise Replicator), a comprehensive cloud-based visual data analytics system with a novel Netflix-inspired technical search capability. Field engineers can exploit this research and our application to search an image repository for visual content. We anticipate that these tools will empower engineers to more rapidly learn new lessons from earthquakes using reconnaissance data.
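The report-assembly step downstream of the trained classifiers can be sketched as simple grouping: each image arrives with a predicted label, and the tool collects them into report sections. The label names below are illustrative placeholders, not ARIO's actual taxonomy.

```python
# Sketch of building a structured report from classifier predictions:
# images are grouped by predicted label, preserving arrival order
# within each section.

from collections import defaultdict

def build_report(predictions):
    """predictions: list of (filename, predicted_label) pairs.
    Returns {label: [filenames]} with arrival order preserved."""
    report = defaultdict(list)
    for name, label in predictions:
        report[label].append(name)
    return dict(report)

preds = [
    ("img_001.jpg", "overview"),
    ("img_002.jpg", "column_damage"),
    ("img_003.jpg", "overview"),
]
```

Calling build_report(preds) groups the two overview shots together and files the column image under its own section, ready for templated report output.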
  6. Reconnaissance teams collect perishable data after each disaster to learn about building performance. However, often these large image sets are not adequately curated, nor do they have sufficient metadata (e.g., GPS), hindering any chance to identify images from the same building when collected by different reconnaissance teams. In this study, Siamese convolutional neural networks (S‐CNN) are implemented and repurposed to establish a building search capability suitable for post‐disaster imagery. This method can automatically rank and retrieve corresponding building images in response to a single query using an image. In the demonstration, we utilize real‐world images collected from 174 reinforced‐concrete buildings affected by the 2016 Southern Taiwan and the 2017 Pohang (South Korea) earthquake events. A quantitative performance evaluation is conducted by examining two metrics introduced for this application: Similarity Score (SS) and Similarity Rank (SR). 
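The retrieval step behind a Siamese-network building search can be sketched as follows: the trained network maps each image to an embedding vector, and gallery images are ranked by similarity of their embeddings to the query's. The tiny hand-made embeddings and cosine similarity below are illustrative assumptions; the paper's network and its Similarity Score metric may differ.

```python
# Sketch of embedding-based image retrieval: rank a gallery of building
# images by cosine similarity to a query embedding, best match first.
# Embeddings are hand-made stand-ins for Siamese-network outputs.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_gallery(query, gallery):
    """gallery: dict of image_id -> embedding vector.
    Returns image ids sorted by similarity to the query, best first."""
    return sorted(gallery, key=lambda k: cosine(query, gallery[k]),
                  reverse=True)

query = [0.9, 0.1, 0.0]
gallery = {
    "bldg_A_team1.jpg": [0.8, 0.2, 0.1],   # same building, other team
    "bldg_B.jpg":       [0.1, 0.9, 0.2],
    "bldg_C.jpg":       [0.0, 0.2, 0.9],
}
```

Here rank_gallery(query, gallery) places bldg_A_team1.jpg first, the retrieval behavior a Similarity Rank metric would score.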
  7.