
Title: Computer-Vision Based UAV Inspection for Steel Bridge Connections
Corrosion of steel bridge members is one of the most important bridge deficiencies and must be carefully monitored by inspectors. Human visual inspection is typically conducted first, and additional measures such as tapping bolts and measuring section losses can then be used to assess the level of corrosion. This process becomes challenging when connections are located where inspectors must climb up or down the steel members. To assist the inspection process, we developed a computer-vision-based Unmanned Aerial Vehicle (UAV) system for monitoring the health of critical steel bridge connections (bolts, rivets, and pins). We used a UAV to collect images from a steel truss bridge and fed the collected datasets into an instance-level segmentation model based on a region-based convolutional neural network, trained on sets of labeled image data to learn the visual characteristics of corrosion at steel connections. The segmentation model identified the locations of connections in the images and efficiently detected the members with corrosion on them. We evaluated the model on how precisely it detects rivets, bolts, pins, and corrosion damage on these members. The results demonstrated the robustness and practicality of our system, which can also provide useful health information to bridge owners for future maintenance. The collected image data can be used to quantitatively track temporal changes and to monitor the progression of damage in aging steel structures. Furthermore, the system can assist inspectors in deciding whether further detailed inspection is needed.
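The abstract does not include code, but the instance-segmentation step it describes can be illustrated with a short sketch. The following assumes a torchvision Mask R-CNN whose heads have been replaced for a hypothetical four-class label set (bolt, rivet, pin, corrosion); the checkpoint file, image filename, and score threshold are placeholders, not details from the paper:

    # Minimal inference sketch for instance segmentation of steel connections.
    # The label set, checkpoint, and image path below are hypothetical.
    import torch
    import torchvision
    from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
    from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor
    from torchvision.transforms.functional import to_tensor
    from PIL import Image

    CLASSES = ["background", "bolt", "rivet", "pin", "corrosion"]

    def build_model(num_classes):
        model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
        # Swap the box and mask heads so the network predicts our classes.
        in_feat = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_feat, num_classes)
        in_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
        model.roi_heads.mask_predictor = MaskRCNNPredictor(in_mask, 256, num_classes)
        return model

    model = build_model(len(CLASSES))
    model.load_state_dict(torch.load("uav_connections.pt", map_location="cpu"))
    model.eval()

    img = to_tensor(Image.open("truss_frame.jpg").convert("RGB"))  # one UAV frame
    with torch.no_grad():
        pred = model([img])[0]  # boxes, labels, scores, and per-instance masks

    for label, score in zip(pred["labels"], pred["scores"]):
        if score > 0.5:
            print(CLASSES[int(label)], float(score))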
Authors:
Award ID(s):
1762034
Publication Date:
NSF-PAR ID:
10278807
Journal Name:
Structural Health Monitoring
Sponsoring Org:
National Science Foundation
More Like this
  1. Bridge inspection is an important step in preserving and rehabilitating transportation infrastructure and extending its service life. Advances in mobile robotic technology allow the rapid collection of large amounts of inspection video data. However, the data are mainly images of complex scenes in which a bridge's various structural elements are mixed with a cluttered background. Assisting bridge inspectors in extracting the structural elements of bridges from this large, complex video data and sorting them by class prepares inspectors for the element-wise inspection needed to determine the condition of bridges. This article develops an assistive intelligence model for segmenting multiclass bridge elements from inspection videos captured by an aerial inspection platform. Starting from a small initial training dataset labeled by inspectors, a Mask Region-based Convolutional Neural Network pre-trained on a large public dataset was transferred to the new task of multiclass bridge element segmentation. In addition, a temporal coherence analysis attempts to recover false negatives and to identify weaknesses that the neural network can learn from to improve. Furthermore, a semi-supervised self-training method was developed to engage experienced inspectors in refining the network iteratively. Quantitative and qualitative results from evaluating the developed deep neural network demonstrate that the proposed method can use a small amount of time and guidance from experienced inspectors (3.58 h for labeling 66 images) to build a network with excellent performance (91.8% precision, 93.6% recall, and 92.7% F1-score). Importantly, the article illustrates an approach to incorporating the domain knowledge and experience of bridge professionals into computational intelligence models, so that the models can be adapted efficiently to the varied bridges in the National Bridge Inventory.
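As a rough illustration of the self-training loop described above, here is a minimal sketch; train, predict, and review are hypothetical callables standing in for the paper's fine-tuning, pseudo-labeling, and inspector-refinement steps:

    from typing import Any, Callable, Iterable, List, Tuple

    # Iterative semi-supervised self-training: fine-tune, pseudo-label,
    # have inspectors refine, then grow the labeled set and repeat.
    def self_train(
        model: Any,
        labeled: List[Tuple[Any, Any]],   # (frame, mask) pairs from inspectors
        unlabeled: Iterable[Any],         # unlabeled inspection-video frames
        train: Callable[[Any, List[Tuple[Any, Any]]], None],
        predict: Callable[[Any, Any], Any],
        review: Callable[[List[Tuple[Any, Any]]], List[Tuple[Any, Any]]],
        rounds: int = 3,
    ) -> Any:
        for _ in range(rounds):
            train(model, labeled)                                 # fine-tune on current labels
            pseudo = [(f, predict(model, f)) for f in unlabeled]  # pseudo-label frames
            labeled = labeled + review(pseudo)                    # inspectors correct and approve
        return model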
  2. Obeid, Iyad; Picone, Joseph; Selesnick, Ivan (Eds.)
    The Neural Engineering Data Consortium (NEDC) is developing a large open source database of high-resolution digital pathology images known as the Temple University Digital Pathology Corpus (TUDP) [1]. Our long-term goal is to release one million images. We expect to release the first 100,000 image corpus by December 2020. The data is being acquired at the Department of Pathology at Temple University Hospital (TUH) using a Leica Biosystems Aperio AT2 scanner [2] and consists entirely of clinical pathology images. More information about the data and the project can be found in Shawki et al. [3]. We currently have a National Science Foundation (NSF) planning grant [4] to explore how the community can best leverage this resource. One goal of this poster presentation is to stimulate community-wide discussions about this project and determine how this valuable resource can best meet the needs of the public. The computing infrastructure required to support this database is extensive [5] and includes two HIPAA-secure computer networks, dual petabyte file servers, and Aperio’s eSlide Manager (eSM) software [6]. We have currently digitized over 50,000 slides from 2,846 patients and 2,942 clinical cases. There is an average of 12.4 slides per patient and 10.5 slides per case, with one report per case. The data is organized by tissue type as shown below (a small parsing sketch follows at the end of this item):

Filenames:
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_0a001_00123456_lvl0001_s000.svs
tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/0s15_12345_00123456.docx

Explanation:
tudp: root directory of the corpus
v1.0.0: version number of the release
svs: the image data type
gastro: the type of tissue
000001: six-digit sequence number used to control directory complexity
00123456: 8-digit patient MRN
2015_03_05: the date the specimen was captured
0s15_12345: the clinical case name
0s15_12345_0a001_00123456_lvl0001_s000.svs: the actual image filename, consisting of a repeat of the case name, a site code (e.g., 0a001), the type and depth of the cut (e.g., lvl0001), and a token number (e.g., s000)
0s15_12345_00123456.docx: the filename for the corresponding case report

We currently recognize fifteen tissue types in the first installment of the corpus. The raw image data is stored in Aperio’s “.svs” format, which is a multi-layered compressed JPEG format [3,7]. Pathology reports containing a summary of how a pathologist interpreted the slide are also provided in a flat text file format. A more complete summary of the demographics of this pilot corpus will be presented at the conference. Another goal of this poster presentation is to share our experiences with the larger community, since many of these details have not been adequately documented in scientific publications. There are quite a few obstacles in collecting this data that have slowed down the process and need to be discussed publicly. Our backlog of slides dates back to 1997, meaning there are many that need to be sifted through and discarded for peeling or cracking. Additionally, during scanning a slide can get stuck, stalling a scan session for hours and resulting in a significant loss of productivity. Over the past two years, we have accumulated significant experience with how to scan a diverse inventory of slides using the Aperio AT2 high-volume scanner. We have been working closely with the vendor to resolve many problems associated with the use of this scanner for research purposes.
This scanning project began in January of 2018 when the scanner was first installed. The scanning process was slow at first because there was a learning curve with how the scanner worked and how to obtain samples from the hospital. From its start date until May of 2019, ~20,000 slides were scanned. In the past six months, from May to November, we have tripled that number and now hold ~60,000 slides in our database. This dramatic increase in productivity was due to additional undergraduate staff members and an emphasis on efficient workflow. The Aperio AT2 scans 400 slides a day, requiring at least eight hours of scan time. The efficiency of these scans can vary greatly. When our team first started, approximately 5% of slides failed the scanning process due to focal point errors. We have been able to reduce that to 1% through a variety of means: (1) best practices regarding daily and monthly recalibrations, (2) tweaking software settings such as the tissue finder parameters, and (3) experience with how to clean and prep slides so they scan properly. Nevertheless, this is not a completely automated process, making it very difficult to reach our production targets. With a staff of three undergraduate workers spending a total of 30 hours per week, we find it difficult to scan more than 2,000 slides per week using a single scanner (400 slides per night x 5 nights per week). The main limitation in achieving this level of production is the lack of a completely automated scanning process; it takes a couple of hours to sort, clean, and load slides. We have streamlined all other aspects of the workflow required to database the scanned slides so that there are no additional bottlenecks. To bridge the gap between hospital operations and research, we are using Aperio’s eSM software. Our goal is to provide pathologists access to high-quality digital images of their patients’ slides. eSM is a secure website that holds the images with their metadata labels, the patient report, and the path to where the image is located on our file server. Although eSM includes significant infrastructure to import slides into the database using barcodes, TUH does not currently support barcode use. Therefore, we manage the data using a mixture of Python scripts and the manual import functions available in eSM. The database and associated tools are based on proprietary formats developed by Aperio, making this another important point of community-wide discussion on how best to disseminate such information. Our near-term goal for the TUDP Corpus is to release 100,000 slides by December 2020. We hope to continue data collection over the next decade until we reach one million slides. We are creating two pilot corpora using the first 50,000 slides we have collected. The first corpus consists of 500 slides with a marker stain and another 500 without it. This set was designed to let people debug their basic deep learning processing flow on these high-resolution images. We discuss our preliminary experiments on this corpus and the challenges in processing these high-resolution images using deep learning in [3]. We are able to achieve a mean sensitivity of 99.0% for slides with pen marks, and 98.9% for slides without marks, using a multistage deep learning algorithm. While this dataset was very useful in initial debugging, we are in the midst of creating a new, more challenging pilot corpus using actual tissue samples annotated by experts. The task will be to detect ductal carcinoma in situ (DCIS) or invasive breast cancer tissue.
There will be approximately 1,000 images per class in this corpus. Based on the number of features annotated, we can train on a two-class problem of DCIS versus benign, or increase the difficulty by expanding the classes to include DCIS, benign, stroma, pink tissue, non-neoplastic, etc. Those interested in the corpus or in participating in community-wide discussions should join our listserv, nedc_tuh_dpath@googlegroups.com, to be kept informed of the latest developments in this project. You can learn more from our project website: https://www.isip.piconepress.com/projects/nsf_dpath.
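The path convention described in this item is regular enough to unpack programmatically. The following is a hypothetical helper (not part of the corpus tooling); the field names follow the "Explanation" list above, and the example path is the one given in the corpus description:

    from pathlib import Path

    # Unpack a TUDP image path into its documented fields.
    def parse_tudp_path(path):
        parts = Path(path).parts
        root, version, dtype, tissue, seq, mrn, date, case = parts[-9:-1]
        pieces = Path(parts[-1]).stem.split("_")
        return {
            "root": root, "version": version, "data_type": dtype,
            "tissue": tissue, "sequence": seq, "patient_mrn": mrn,
            "date": date, "case": case,
            "site": pieces[2],    # e.g., 0a001
            "cut": pieces[4],     # e.g., lvl0001 (type and depth of the cut)
            "token": pieces[5],   # e.g., s000
        }

    info = parse_tudp_path(
        "tudp/v1.0.0/svs/gastro/000001/00123456/2015_03_05/0s15_12345/"
        "0s15_12345_0a001_00123456_lvl0001_s000.svs")
    print(info["tissue"], info["site"], info["cut"])  # gastro 0a001 lvl0001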
  3. Flooding is one of the leading natural-disaster threats to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned Aerial Vehicle (UAV) technology has recently been recognized as an efficient photogrammetry data-acquisition platform for quickly delivering high-resolution imagery because of its cost-effectiveness, ability to fly at lower altitudes, and ability to enter hazardous areas. Different image classification methods, including Support Vector Machines (SVMs), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using Convolutional Neural Networks (CNNs). CNNs have demonstrated excellent performance on various tasks, including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches for extracting flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on a dataset containing only one hundred training samples and still produce a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with the results obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas from UAV images more precisely than traditional classifiers such as SVMs. The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
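The k-fold evaluation mentioned above can be sketched in a few lines with scikit-learn. Here train_fcn and evaluate are hypothetical stand-ins for FCN-16s fine-tuning and per-class accuracy scoring; the fold count is illustrative:

    import numpy as np
    from sklearn.model_selection import KFold

    # Average held-out accuracy over k folds of a small image set.
    def kfold_accuracy(images, masks, train_fcn, evaluate, k=5):
        scores = []
        for train_idx, test_idx in KFold(n_splits=k, shuffle=True,
                                         random_state=0).split(images):
            model = train_fcn(images[train_idx], masks[train_idx])  # fit on k-1 folds
            scores.append(evaluate(model, images[test_idx], masks[test_idx]))
        return float(np.mean(scores))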
  4. Recent technological advances have led to increased adoption of Unmanned Aerial Vehicles (UAVs) in a variety of use-case scenarios. In particular, Departments of Transportation in several U.S. states have been exploring the use of UAVs for bridge and infrastructure inspections to improve safety and reduce the costs of the inspection process. UAVs are remotely piloted from a cockpit or a ground station via radio channels. The UAV's state information and payload information are also transmitted to the cockpit/ground station via radio frequency (RF) signals. The RF channels commonly used by most UAVs are the 72-73, 902-928, and 2400-2483.5 MHz bands, which are also shared by several other communication protocols, such as WiFi and ZigBee networks. The interference effects of these other services on the UAV's operational performance therefore cannot be overlooked, particularly when the UAV must maintain a minimum distance from nearby surfaces while flying alongside and underneath bridges to achieve the best results. Loss of signal, or even of signal strength, during such close flights can cause damage to the UAV. This is especially true when inspecting bridges in urban areas, where many RF devices providing different services create heavy RF traffic. Conventional Electromagnetic Compatibility (EMC) adherence requirements imposed on electronic systems are not adequate for UAVs because of their airborne nature and the presence of other RF sources in the environment. Thus, in this work, we investigate compliance with EMC requirements by designing and conducting field experiments that expose UAVs to the electromagnetic interference and distortions likely to be encountered during UAV operation. The results of this work will enable us to assess the level of RF immunity of general-purpose UAVs, to aid in the selection of a suitable UAV platform for bridge inspection, and to develop safety procedures for minimizing the impact of RF interference.
  5. This paper evaluates the ability of two different data-driven models to detect and localize simulated structural damage in an in-service bridge for long-term structural health monitoring (SHM). Strain gauge data collected over 4 years are used to characterize the undamaged state of the bridge. The Powder Mill Bridge in Barre, Massachusetts, U.S., which has been instrumented with strain gauges since its opening in 2009, is used as a case study; the strain gauges used in this study are located at 26 stations throughout the bridge superstructure. A linear regression (LR) model and an artificial neural network (ANN) model are evaluated based on the following criteria: (a) the ability to accurately predict the strain at each location in the undamaged state of the bridge; (b) the ability to detect simulated structural damage to the bridge superstructure; and (c) the ability to localize simulated structural damage. Both the LR and the ANN models were able to predict the strain at the 26 stations with an average error of less than 5%, indicating that both methodologies were effective in characterizing the undamaged state of the bridge. A calibrated finite element model was then used to simulate damage to the Powder Mill Bridge for three damage scenarios: fascia girder corrosion, girder fracture, and deck delamination. The LR model proved to be just as effective as the ANN model at detecting and localizing damage. A recommended protocol is thus presented for integrating data-driven models into bridge asset management systems.
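The residual-based idea behind the LR baseline can be sketched briefly: fit a regression that predicts one gauge's strain from the others in the undamaged state, then flag damage when the prediction error grows. The 5% tolerance below mirrors the reported average prediction error but is otherwise illustrative, and the data arrays are placeholders:

    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Fit the healthy-state model: strains at the other stations -> target gauge.
    def fit_baseline(X_healthy, y_healthy):
        return LinearRegression().fit(X_healthy, y_healthy)

    # Flag readings whose relative prediction error exceeds the tolerance.
    def flag_damage(model, X_new, y_new, rel_tol=0.05):
        resid = np.abs(model.predict(X_new) - y_new)
        return resid / np.maximum(np.abs(y_new), 1e-9) > rel_tol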