Natural disasters such as wildfires, landslides, and earthquakes leave roads obstructed by fallen trees, landslide debris, and rocks. Such obstructions can cause significant mobility problems for both evacuees and first responders, especially in the immediate aftermath of a disaster. Unmanned Aerial Vehicles (UAVs) offer an opportunity to perform rapid, remote reconnaissance of planned routes and thus provide decision-makers with information about a route’s feasibility. However, detecting obstacles on roads manually is a laborious and error-prone task, especially when attention is diverted to more urgent needs during disaster scenarios. This paper therefore proposes a computer vision and machine learning framework that automatically detects obstacles on a road to assess its passability in the aftermath of a disaster. The framework applies the YOLO algorithm to detect and segment roads in images from UAVs and in reference images from publicly available datasets. Both the UAV images and the reference images are segmented, the roadway pixels in each are counted, and the difference in pixel counts is used to identify obstructions on the road. In addition, a method is proposed to automatically detect obstructions within the region of interest (ROI) on the roadway only, using images and videos from UAVs. Preliminary results from test runs are presented along with future steps toward a real-time UAV-based road obstruction detection system.
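The pixel-count comparison this abstract describes can be sketched roughly as follows; the toy binary masks, function names, and 20% loss threshold are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch: flag a possible road obstruction by comparing road-pixel counts
# between a pre-event reference mask and a post-event UAV mask.
# Masks are binary 2D grids (1 = road pixel).

def road_pixel_count(mask):
    """Count pixels labeled as road in a binary segmentation mask."""
    return sum(sum(row) for row in mask)

def obstruction_suspected(reference_mask, uav_mask, loss_threshold=0.20):
    """Return True if the UAV image shows a large drop in visible road pixels."""
    ref = road_pixel_count(reference_mask)
    now = road_pixel_count(uav_mask)
    if ref == 0:
        return False  # no road in the reference; nothing to compare
    loss = (ref - now) / ref
    return loss > loss_threshold

# Toy 4x4 masks: the post-event mask has road pixels missing
# (e.g., covered by a fallen tree).
reference = [[1, 1, 1, 1]] * 4    # 16 road pixels
post_event = [[1, 1, 0, 0]] * 4   # 8 road pixels -> 50% loss
print(obstruction_suspected(reference, post_event))  # True
```

In a real pipeline the masks would come from the YOLO segmentation output rather than hand-written grids, and the threshold would need calibration against registration error between the UAV and reference views.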
Computer-Aided Approach for Rapid Post-Event Visual Evaluation of a Building Façade
After a disaster strikes an urban area, damage to building façades may produce dangerous falling hazards that jeopardize pedestrians and vehicles. Thus, building façades must be rapidly inspected to prevent potential loss of life and property damage. Harnessing new vision sensors and associated sensing platforms, such as unmanned aerial vehicles (UAVs), would expedite this process and alleviate the spatial and temporal limitations typically associated with human-based inspection of high-rise buildings. In this paper, we develop an approach to perform rapid and accurate visual inspection of building façades using images collected from UAVs. An orthophoto corresponding to any reasonably flat region on the building (e.g., a façade or building side) is automatically constructed using a structure-from-motion (SfM) technique, followed by image stitching and blending. Based on the geometric relationship between the collected images and the constructed orthophoto, high-resolution regions of interest are automatically extracted from the collected images, enabling efficient visual inspection. We successfully demonstrate the capabilities of the technique on an abandoned building whose façade has damaged components (e.g., window panes and external drainage pipes).
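The geometric relationship between an orthophoto of a planar façade and a collected image can be modeled by a 3×3 homography; a minimal sketch of mapping ROI corners through such a transform is shown below. The matrix H and the ROI coordinates are illustrative values only, not the paper's calibration.

```python
# Sketch: map a region of interest from orthophoto coordinates into a
# collected image via a planar homography (projective transform).

def apply_homography(H, x, y):
    """Map a 2D point through a 3x3 homography matrix."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# Illustrative homography: identity plus a translation of (10, 5) pixels.
H = [[1, 0, 10],
     [0, 1, 5],
     [0, 0, 1]]

# Corners of an ROI on the orthophoto, mapped into the collected image,
# from which the high-resolution crop would be extracted.
roi = [(0, 0), (100, 0), (100, 50), (0, 50)]
print([apply_homography(H, x, y) for x, y in roi])
```

In practice the homography would be estimated from SfM correspondences (e.g., with a robust fitting step), not specified by hand.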
- Award ID(s):
- 1645047
- PAR ID:
- 10074968
- Date Published:
- Journal Name:
- Sensors
- Volume:
- 18
- Issue:
- 9
- ISSN:
- 1424-8220
- Page Range / eLocation ID:
- 3017
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
The purpose of a routine bridge inspection is to assess the physical and functional condition of a bridge according to a regularly scheduled interval. The Federal Highway Administration (FHWA) requires these inspections to be conducted at least every 2 years. Inspectors use simple tools and visual inspection techniques to determine the conditions of both the elements of the bridge structure and the bridge overall. While in the field, the data are collected in the form of images and notes; after the field work is complete, inspectors must generate a report based on these data to document their findings. The report generation process includes several tasks: (1) evaluating the condition rating of each bridge element according to the FHWA Recording and Coding Guide for Structure Inventory and Appraisal of the Nation’s Bridges; and (2) updating and organizing the bridge inspection images for the report. Both of these tasks are time-consuming. This study focuses on assisting with the latter task by developing an artificial intelligence (AI)-based method to rapidly organize bridge inspection images and generate a report. In this paper, an image organization schema based on the FHWA Recording and Coding Guide for the Structure Inventory and Appraisal of the Nation’s Bridges and the Manual for Bridge Element Inspection is described, and several convolutional neural network-based classifiers are trained with real inspection images collected in the field. Additionally, exchangeable image file (EXIF) information is automatically extracted to organize inspection images according to their time stamps. Finally, the Automated Bridge Image Reporting Tool (ABIRT) is described as a browser-based system built on the trained classifiers. Inspectors can directly upload images to this tool and rapidly obtain organized images and an associated inspection report from any computer with an internet connection.
The authors provide recommendations to inspectors for gathering future images to make the best use of this tool.
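The timestamp-based organization step this abstract mentions can be sketched as below. The filenames and EXIF-style timestamp strings are made up; a real pipeline would read the DateTimeOriginal tag from each image file (e.g., with a library such as Pillow), which is not shown here.

```python
# Sketch: group inspection photos into report sections by capture date,
# ordered by capture time within each day.
from collections import defaultdict
from datetime import datetime

def organize_by_day(images):
    """Group (filename, 'YYYY:MM:DD HH:MM:SS') pairs by date, sorted in time order."""
    by_day = defaultdict(list)
    for name, stamp in images:
        taken = datetime.strptime(stamp, "%Y:%m:%d %H:%M:%S")  # EXIF date format
        by_day[taken.date().isoformat()].append((taken, name))
    return {day: [n for _, n in sorted(shots)] for day, shots in by_day.items()}

photos = [
    ("deck_02.jpg",   "2023:05:14 10:42:00"),
    ("girder_01.jpg", "2023:05:14 09:15:30"),
    ("pier_01.jpg",   "2023:05:15 08:05:10"),
]
print(organize_by_day(photos))
# {'2023-05-14': ['girder_01.jpg', 'deck_02.jpg'], '2023-05-15': ['pier_01.jpg']}
```

The described system would combine this temporal grouping with the element labels predicted by the CNN classifiers to place each photo in the right section of the report.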
-
The rising frequency of natural disasters demands efficient and accurate structural damage assessments to ensure public safety and expedite recovery. Human error, inconsistent standards, and safety risks limit traditional visual inspections by engineers. Although UAVs and AI have advanced post-disaster assessments, they still lack the expert knowledge and decision-making judgment of human inspectors. This study explores how expertise shapes human–building interaction during disaster inspections by using eye tracking technology to capture the gaze patterns of expert and novice inspectors. A controlled, screen-based inspection method was employed to safely gather data, which was then used to train a machine learning model for saliency map prediction. The results highlight significant differences in visual attention between experts and novices, providing valuable insights for future inspection strategies and training novice inspectors. By integrating human expertise with automated systems, this research aims to improve the accuracy and reliability of post-disaster structural assessments, fostering more effective human–machine collaboration in disaster response efforts.
-
Corrosion on steel bridge members is one of the most important bridge deficiencies that must be carefully monitored by inspectors. Human visual inspection is typically conducted first, and additional measures such as tapping bolts and measuring section losses can be used to assess the level of corrosion. This process becomes a challenge when some of the connections are located where inspectors have to climb up or down the steel members. To assist this inspection process, we developed a computer-vision-based Unmanned Aerial Vehicle (UAV) system for monitoring the health of critical steel bridge connections (bolts, rivets, and pins). We used a UAV to collect images from a steel truss bridge. We then fed the collected datasets into an instance-level segmentation model, using a region-based convolutional neural network to learn the characteristics of corrosion at steel connections from sets of labeled image data. The segmentation model identified the locations of connections in images and efficiently detected the members with corrosion on them. We evaluated the model on how precisely it can detect rivets, bolts, pins, and corrosion damage on these members. The results showed the robustness and practicality of our system, which can also provide useful health information to bridge owners for future maintenance. The collected image data can be used to quantitatively track temporal changes and to monitor the progression of damage in aging steel structures. Furthermore, the system can assist inspectors in making decisions about further detailed inspections.
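An evaluation of detection precision like the one this abstract describes is commonly scored by matching predicted boxes to ground truth with an intersection-over-union (IoU) test; a minimal sketch follows. The boxes and the 0.5 IoU cutoff are illustrative, not the paper's actual evaluation settings.

```python
# Sketch: score detections of connection elements (bolts, rivets, pins)
# against ground-truth boxes using greedy IoU matching.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_recall(detections, ground_truth, thresh=0.5):
    """Match each detection to at most one unmatched ground-truth box."""
    matched, tp = set(), 0
    for det in detections:
        for i, gt in enumerate(ground_truth):
            if i not in matched and iou(det, gt) >= thresh:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(detections) if detections else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Two true positives, one false positive -> precision 2/3, recall 1.0.
dets = [(0, 0, 10, 10), (20, 20, 30, 30), (100, 100, 110, 110)]
truth = [(1, 1, 11, 11), (20, 20, 30, 30)]
print(precision_recall(dets, truth))
```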
-
Image data remains an important tool for post-event building assessment and documentation. After each natural hazard event, significant efforts are made by teams of engineers to visit the affected regions and collect useful image data. In general, a global positioning system (GPS) can provide useful spatial information for localizing image data. However, it is challenging to collect such information when images are captured in places where GPS signals are weak or interrupted, such as the indoor spaces of buildings. The inability to document the images’ locations hinders the analysis, organization, and documentation of these images as they lack sufficient spatial context. In this work, we develop a methodology to localize images and link them to locations on a structural drawing. A stream of images can readily be gathered along the path taken through a building using a compact camera. These images may be used to compute a relative location of each image in a 3D point cloud model, which is reconstructed using a visual odometry algorithm. The images may also be used to create local 3D textured models of building components of interest using a structure-from-motion algorithm. A parallel set of images collected for building assessment is linked to the image stream using time information. By projecting the point cloud model onto the structural drawing, the images can be overlaid onto the drawing, providing the clear context information necessary to make use of those images. Additionally, components or damage of interest captured in these images can be reconstructed in 3D, enabling detailed assessments with sufficient geospatial context. The technique is demonstrated by emulating post-event building assessment and data collection in a real building.
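The projection of the point cloud onto the structural drawing can be sketched, in its simplest form, as an orthographic projection onto the floor-plan plane followed by a scale and offset into drawing coordinates. The alignment parameters below are illustrative assumptions; a real system would estimate them by registering the point cloud to the drawing.

```python
# Sketch: overlay camera positions from a visual-odometry point cloud onto
# a 2D structural drawing by dropping the height axis and applying a
# hypothetical scale/offset alignment.

def to_drawing(point, scale=10.0, offset=(50.0, 50.0)):
    """Project a 3D point (x, y, z) to 2D drawing pixels, ignoring height z."""
    x, y, _z = point
    return (offset[0] + scale * x, offset[1] + scale * y)

# Camera path through the building, in the point-cloud coordinate frame.
path = [(0.0, 0.0, 1.5), (1.0, 0.0, 1.5), (1.0, 2.0, 1.5)]
print([to_drawing(p) for p in path])
# [(50.0, 50.0), (60.0, 50.0), (60.0, 70.0)]
```

Each assessment image, time-linked to a pose on this path, would then be anchored at the corresponding drawing location.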