

Title: Machine-Supported Bridge Inspection Image Documentation Using Artificial Intelligence
The purpose of a routine bridge inspection is to assess the physical and functional condition of a bridge on a regularly scheduled interval. The Federal Highway Administration (FHWA) requires these inspections to be conducted at least every 2 years. Inspectors use simple tools and visual inspection techniques to determine the condition of both the individual elements of the bridge structure and the bridge overall. While in the field, data are collected in the form of images and notes; after the field work is complete, inspectors must generate a report based on these data to document their findings. The report generation process includes several tasks: (1) evaluating the condition rating of each bridge element according to the FHWA Recording and Coding Guide for the Structure Inventory and Appraisal of the Nation’s Bridges; and (2) updating and organizing the bridge inspection images for the report. Both tasks are time-consuming. This study focuses on assisting with the latter task by developing an artificial intelligence (AI)-based method to rapidly organize bridge inspection images and generate a report. In this paper, an image organization schema based on the FHWA Recording and Coding Guide for the Structure Inventory and Appraisal of the Nation’s Bridges and the Manual for Bridge Element Inspection is described, and several convolutional neural network (CNN)-based classifiers are trained with real inspection images collected in the field. Additionally, exchangeable image file format (EXIF) information is automatically extracted to organize inspection images according to their time stamps. Finally, the Automated Bridge Image Reporting Tool (ABIRT) is described as a browser-based system built on the trained classifiers. Inspectors can directly upload images to this tool and rapidly obtain organized images and an associated inspection report using any computer with an internet connection. The authors provide recommendations for gathering future inspection images to make the best use of this tool.
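To make the EXIF-based organization step concrete, the sketch below groups inspection photos by capture date. It assumes Pillow as the EXIF reader and JPEG inputs; neither is specified in the paper, and this is not the ABIRT implementation itself.

    # Minimal sketch (not ABIRT): group inspection photos by EXIF capture date.
    from pathlib import Path
    from PIL import Image  # assumed EXIF reader; the paper does not name a library

    def capture_time(image_path):
        """Return the EXIF capture-time string, or None if the tag is missing."""
        exif = Image.open(image_path).getexif()
        sub_ifd = exif.get_ifd(0x8769)                   # Exif sub-IFD
        return sub_ifd.get(0x9003) or exif.get(0x0132)   # DateTimeOriginal, else DateTime

    def group_by_day(image_dir):
        """Map capture date -> list of file names, e.g. {'2021:06:14': [...]}."""
        groups = {}
        for p in sorted(Path(image_dir).glob("*.jpg")):
            stamp = capture_time(p)                      # e.g. '2021:06:14 10:32:05'
            day = stamp.split()[0] if stamp else "unknown"
            groups.setdefault(day, []).append(p.name)
        return groups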
Award ID(s):
1835473
NSF-PAR ID:
10384636
Author(s) / Creator(s):
Date Published:
Journal Name:
Transportation Research Record: Journal of the Transportation Research Board
ISSN:
0361-1981
Page Range / eLocation ID:
036119812211358
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The Federal Highway Administration (FHWA) mandates biennial bridge inspections to assess the condition of all bridges in the United States. These inspections are recorded in the National Bridge Inventory (NBI) and in each state’s databases so the data can be managed, studied, and analyzed. As FHWA specifications become more complex, inspections require more training and field time. Recently, element-level inspections were added, which assign a condition state to each minor element in the bridge. To address this new requirement, a machine-aided bridge inspection method was developed using artificial intelligence (AI) to assist inspectors. The proposed method focuses on condition state assessment of cracking in reinforced concrete bridge deck elements. A deep learning-based workflow that integrates image classification and semantic segmentation methods is used to extract information from images and evaluate the condition state of cracks according to FHWA specifications. The workflow uses deep neural networks to extract the information required by the bridge inspection manual, enabling the condition state of cracks in the deck to be determined. Experimental results demonstrate the effectiveness of this workflow for this application. The method also balances the costs and risks associated with increasing levels of AI involvement, enabling inspectors to better manage their resources. This AI-based method can be implemented by asset owners, such as Departments of Transportation, to better serve communities.
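    As a concrete illustration of the final step of such a workflow, the sketch below maps a crack-width estimate (derived from a segmentation mask) to a condition state. The width limits are illustrative placeholders only, not the thresholds in the FHWA or AASHTO manuals.

        # Illustrative only: the limits below are placeholders, NOT the values
        # from the FHWA/AASHTO element-level inspection manuals.
        def crack_condition_state(max_crack_width_mm, fair_limit=0.3, poor_limit=1.0):
            """Map an estimated maximum crack width (mm) to a 1-4 condition state."""
            if max_crack_width_mm <= 0:
                return 1   # CS1 Good: no measurable cracking
            if max_crack_width_mm <= fair_limit:
                return 2   # CS2 Fair: narrow cracks
            if max_crack_width_mm <= poor_limit:
                return 3   # CS3 Poor: moderate-width cracks
            return 4       # CS4 Severe: warrants structural review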
  2. Bridge inspection is an important step in preserving and rehabilitating transportation infrastructure and extending its service life. Advances in mobile robotic technology allow the rapid collection of large amounts of inspection video data. However, the data consist mainly of images of complex scenes, in which a bridge’s various structural elements are mixed with a cluttered background. Assisting bridge inspectors in extracting structural elements from this large, complex video data set, and sorting them by class, prepares inspectors for the element-wise inspection needed to determine the condition of bridges. This article develops an assistive intelligence model for segmenting multiclass bridge elements from inspection videos captured by an aerial inspection platform. With a small initial training dataset labeled by inspectors, a Mask Region-based Convolutional Neural Network (Mask R-CNN) pre-trained on a large public dataset was transferred to the new task of multiclass bridge element segmentation. In addition, temporal coherence analysis attempts to recover false negatives and identify weaknesses from which the neural network can learn and improve. Furthermore, a semi-supervised self-training method was developed to engage experienced inspectors in refining the network iteratively. Quantitative and qualitative results from evaluating the developed deep neural network demonstrate that the proposed method can use a small amount of time and guidance from experienced inspectors (3.58 h for labeling 66 images) to build a network with excellent performance (91.8% precision, 93.6% recall, and 92.7% F1-score). Importantly, the article illustrates an approach to incorporating the domain knowledge and experience of bridge professionals into computational intelligence models so the models can be efficiently adapted to the varied bridges in the National Bridge Inventory.
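    A minimal transfer-learning sketch in the spirit of the approach above is shown next: a torchvision Mask R-CNN pre-trained on COCO has its box and mask heads replaced for a new set of bridge-element classes. The choice of torchvision and the class count are assumptions, not details from the article.

        import torchvision
        from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
        from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

        def build_bridge_element_model(num_classes):
            """num_classes includes the background class."""
            model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
            # Replace the box-classification head for the new classes.
            in_features = model.roi_heads.box_predictor.cls_score.in_features
            model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
            # Replace the mask-prediction head as well.
            in_channels = model.roi_heads.mask_predictor.conv5_mask.in_channels
            model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels, 256, num_classes)
            return model

        model = build_bridge_element_model(num_classes=5)  # e.g., 4 element classes + background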
  3. In the aftermath of earthquake events, reconnaissance teams are deployed to gather vast numbers of images, moving quickly to capture perishable data that document the performance of infrastructure before such evidence is destroyed. Learning from these data enables engineers to gain new knowledge about the real-world performance of structures. This new knowledge, extracted from visual data, is critical to mitigating the risks (e.g., damage and loss of life) associated with our built environment in future events. Currently, this learning process is entirely manual, requiring considerable time and expense. Thus, unfortunately, only a tiny portion of these images are shared, curated, and actually utilized. The power of computers and artificial intelligence enables a new approach to organizing and cataloging such visual data with minimal manual effort. Here we discuss the development and deployment of an organizational system to automate the analysis of large volumes of post-disaster visual data (images). Our application, named the Automated Reconnaissance Image Organizer (ARIO), allows a field engineer to rapidly and automatically categorize their reconnaissance images. ARIO exploits deep convolutional neural networks and trained classifiers, and yields a structured report combined with useful metadata. Classifiers are trained using our ground-truth visual database, which includes over 140,000 images from past earthquake reconnaissance missions to study post-disaster buildings in the field. We also discuss the deployment of the ARIO application within a cloud-based system that we named VISER (Visual Structural Expertise Replicator), a comprehensive cloud-based visual data analytics system with a novel Netflix-inspired technical search capability. Field engineers can exploit this research and our application to search an image repository for visual content. We anticipate that these tools will empower engineers to more rapidly learn new lessons from earthquakes using reconnaissance data.
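    A simplified sketch of the organize-and-report idea is given below; the classifier stub, category name, and JSON report layout are assumptions and do not reflect the deployed ARIO/VISER implementation.

        import json
        from pathlib import Path

        def organize_images(image_dir, classify):
            """classify(path) -> (category, confidence); returns images grouped by category."""
            report = {}
            for p in sorted(Path(image_dir).glob("*.jpg")):
                category, confidence = classify(p)
                report.setdefault(category, []).append(
                    {"file": p.name, "confidence": round(confidence, 3)}
                )
            return report

        # Stub classifier standing in for the trained CNNs.
        stub_classifier = lambda p: ("building_overview", 0.87)
        print(json.dumps(organize_images("mission_photos", stub_classifier), indent=2))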
  4. Corrosion of steel bridge members is one of the most important bridge deficiencies and must be carefully monitored by inspectors. Human visual inspection is typically conducted first, and additional measures such as tapping bolts and measuring section losses can be used to assess the level of corrosion. This process becomes a challenge when some of the connections are located where inspectors have to climb up or down the steel members. To assist with this inspection process, we developed a computer vision-based Unmanned Aerial Vehicle (UAV) system for monitoring the health of critical steel bridge connections (bolts, rivets, and pins). We used a UAV to collect images from a steel truss bridge. We then fed the collected datasets into an instance-level segmentation model, using a region-based convolutional neural network trained on labeled image data to learn the characteristics of corrosion at steel connections. The segmentation model identified the locations of connections in the images and efficiently detected members with corrosion on them. We evaluated the model on how precisely it can detect rivets, bolts, pins, and corrosion damage on these members. The results showed the robustness and practicality of our system, which can also provide useful health information to bridge owners for future maintenance. The collected image data can be used to quantitatively track temporal changes and monitor the progression of damage in aging steel structures. Furthermore, the system can assist inspectors in making decisions about further detailed inspections.
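    The per-class evaluation described above can be summarized as in the sketch below. The counts are illustrative only, not results from the study, and predictions are assumed to have already been matched to ground truth (e.g., at an IoU threshold of 0.5).

        def precision_recall(tp, fp, fn):
            """Per-class precision and recall from matched-detection counts."""
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            recall = tp / (tp + fn) if (tp + fn) else 0.0
            return precision, recall

        # Illustrative counts only (true positives, false positives, false negatives).
        counts = {"bolt": (120, 8, 15), "rivet": (95, 5, 10),
                  "pin": (12, 1, 2), "corrosion": (60, 12, 9)}
        for name, (tp, fp, fn) in counts.items():
            p, r = precision_recall(tp, fp, fn)
            print(f"{name}: precision={p:.2f}, recall={r:.2f}")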
  5. The C+ score for US bridges on the 2017 infrastructure report card underscores the need for improved data-driven methods to understand bridge performance. There is considerable interest and prior work in using inspection records to determine bridge health scores. However, aggregating, cleaning, and analyzing bridge inspection records from all states and all past years is a challenging task, limiting the accessibility and reproducibility of findings. This research introduces a new score computed using inspection records from the National Bridge Inventory (NBI) dataset. Differences between the time series of condition ratings for a bridge and a time series of average national condition ratings by age are used to develop a health score for that bridge. This baseline difference score complements NBI condition ratings in further understanding a bridge’s performance over time. Moreover, the role of bridge attributes and environmental factors can be analyzed using the score. Such analysis shows that bridge material type has the strongest association with the baseline difference score, followed by snowfall and maintenance. This research also makes a methodological contribution by outlining a data-driven approach to repeatable and scalable analysis of the NBI dataset.
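    A minimal sketch of the baseline-difference idea is shown below: a bridge’s condition-rating history is compared against the national average rating for bridges of the same age. The column names and the use of a simple mean are assumptions, not the paper’s exact formulation.

        import pandas as pd

        def baseline_difference_score(bridge_ratings, national_baseline):
            """
            bridge_ratings: DataFrame with columns ['age', 'rating'] for one bridge.
            national_baseline: Series of mean NBI condition ratings indexed by bridge age.
            Returns the mean rating difference; positive means better than the national baseline.
            """
            expected = bridge_ratings["age"].map(national_baseline)
            return float((bridge_ratings["rating"] - expected).mean())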