Title: Monitoring the earthquake response of full‐scale structures using UAV vision‐based techniques
Vision-based sensing, when utilized in conjunction with camera-equipped unmanned aerial vehicles (UAVs), has recently emerged as an effective sensing technique in a variety of civil engineering applications (e.g., construction monitoring, condition assessment, and post-disaster reconnaissance). However, the use of these non-intrusive sensing techniques for extracting the dynamic response of structures has been restricted by the perspective and scale distortions, as well as the image misalignments, caused by the movement of the UAV and its on-board camera during flight operations. To overcome these limitations, a vision-based analysis methodology is proposed in the present study for extracting the dynamic response of structures from UAV aerial videos. Importantly, geo-referenced targets were strategically placed on the structures and in the background (stationary) region to enhance the robustness and accuracy of image feature detection. Image processing and photogrammetric techniques are adopted in the analysis procedure, first to recover the camera motion using the world-to-image correspondences of the background (stationary) targets and subsequently to extract the dynamic structural response by reprojecting the image features of the (moving) targets attached to the structures to world coordinates. The displacement tracking results are validated against the responses of two full-scale test structures measured by analog displacement sensors during a sequence of shake table tests. The high precision of the vision-based structural displacement results (root-mean-square errors below 3 mm) demonstrates the effectiveness of the proposed UAV displacement tracking methodology. Additionally, the limitations of the proposed methodology for monitoring the dynamic responses of real structures, along with potential solutions, are discussed.
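The following is a minimal Python/OpenCV sketch of the two-step idea summarized above: recover the camera pose of each video frame from the stationary, geo-referenced background targets, then reproject the image coordinates of the targets attached to the structure back to world coordinates. The camera intrinsics, the assumption that the structural targets move in the world plane Z = 0, and the function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2

def camera_pose_from_background(world_pts, image_pts, K, dist):
    """Recover the frame's camera pose from stationary, geo-referenced targets."""
    ok, rvec, tvec = cv2.solvePnP(world_pts, image_pts, K, dist,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("camera pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)
    return R, tvec

def reproject_to_world_plane(image_pt, K, dist, R, tvec, plane_z=0.0):
    """Intersect the back-projected pixel ray with the world plane Z = plane_z
    (hypothetical assumption that the structural targets move in that plane)."""
    norm_pt = cv2.undistortPoints(image_pt.reshape(1, 1, 2).astype(np.float32),
                                  K, dist).reshape(2)
    ray_cam = np.array([norm_pt[0], norm_pt[1], 1.0])   # viewing ray, camera frame
    cam_center = (-R.T @ tvec).ravel()                  # camera center in world frame
    ray_world = R.T @ ray_cam                           # ray direction in world frame
    s = (plane_z - cam_center[2]) / ray_world[2]        # scale factor to reach the plane
    return cam_center + s * ray_world                   # target position in world units
```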
Award ID(s):
1663569
NSF-PAR ID:
10341550
Author(s) / Creator(s):
; ; ; ;
Date Published:
Journal Name:
Structural Control and Health Monitoring
Volume:
29
Issue:
1
ISSN:
1545-2255
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract

    Computer vision-based displacement measurement for structural monitoring has grown popular. However, tracking natural low-contrast targets under low-illumination conditions is unavoidable for vision sensors in field measurements, and it poses challenges for intensity-based vision-sensing techniques. A new edge-enhanced-matching (EEM) technique, improved from the earlier orientation-code-matching (OCM) technique, is proposed to enable robust tracking of low-contrast features. Besides extracting gradient orientations from images as OCM does, the proposed EEM technique also utilizes gradient magnitudes to identify and enhance subtle edge features, forming EEM images. A ranked-segmentation filtering technique is also developed to post-process the EEM images so that edge features are easier to identify. The robustness and accuracy of EEM in tracking low-contrast features are validated against OCM in field tests conducted on a railroad bridge and the long-span Manhattan Bridge. Frequency-domain analyses are also performed to further validate the displacement accuracy.

     
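    As an illustration of the general idea behind orientation-code and edge-enhancement techniques, the sketch below quantizes gradient orientations into discrete codes (OCM-style) and lifts weak gradient magnitudes so that subtle edges become easier to match. The bin count, magnitude threshold, and gamma-style enhancement are assumptions for illustration; the actual EEM and ranked-segmentation filtering steps are not reproduced here.

```python
import numpy as np
import cv2

def orientation_codes(gray, n_bins=16, mag_thresh=5.0):
    """OCM-style feature: quantize gradient orientation into n_bins codes;
    pixels with weak gradients receive a dedicated 'no-edge' code."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)                                    # in (-pi, pi]
    codes = np.floor((ang + np.pi) / (2 * np.pi) * n_bins).astype(np.int32) % n_bins
    codes[mag < mag_thresh] = n_bins                            # 'no edge' code
    return codes, mag

def enhance_weak_edges(gray, gamma=0.5):
    """Illustrative enhancement step: compress the gradient-magnitude range so
    subtle (low-contrast) edges are lifted relative to strong ones."""
    _, mag = orientation_codes(gray)
    mag_norm = mag / (mag.max() + 1e-9)
    return np.uint8(255 * mag_norm ** gamma)                    # gamma < 1 boosts weak edges
```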
  2. Abstract

    In this paper, a new framework is proposed for monitoring the dynamic performance of bridges at low cost and high efficiency using three different camera placements and several visual data processing techniques. The framework includes a deep learning method for motion tracking, validated by an optical flow approach. To verify the framework, videos of two shaking table tests captured by stationary cameras were processed first. Then, the vibrations of six pedestrian bridges were measured using structure-mounted, remote, and drone-mounted cameras, respectively. Two techniques, displacement subtraction and frequency subtraction, are applied to remove the systematic motion of the cameras and to capture the natural frequencies of the tested structures. Measurements on these bridges were compared with data from wireless accelerometers and with structural analysis. The influence of critical camera-setting and data-processing parameters, such as video frame rate, data window size, and data sampling rate, was also studied carefully. The results show that the vibrations and frequencies of the structures on the shaking tables and of the existing bridges can be captured accurately with the proposed framework, and that these camera placements and data processing techniques can be successfully used for monitoring the dynamic performance of bridges.

     
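    The displacement-subtraction step described above can be illustrated with a short sketch: the apparent motion of a stationary reference point is subtracted from the tracked target motion to cancel systematic camera movement, and a simple FFT peak pick then recovers the dominant vibration frequency. The function names and the constant frame rate are assumptions; the paper's deep-learning and optical-flow tracking stages are not shown.

```python
import numpy as np

def remove_camera_motion(target_disp, reference_disp):
    """Displacement subtraction: remove the apparent motion of a stationary
    reference point from the tracked target motion."""
    return np.asarray(target_disp, float) - np.asarray(reference_disp, float)

def dominant_frequency(disp, fps):
    """Pick the dominant vibration frequency (Hz) of a displacement trace."""
    disp = np.asarray(disp, float)
    disp = disp - disp.mean()                       # drop the DC offset
    spectrum = np.abs(np.fft.rfft(disp))
    freqs = np.fft.rfftfreq(len(disp), d=1.0 / fps)
    return freqs[np.argmax(spectrum[1:]) + 1]       # skip the zero-frequency bin

# frequency subtraction (conceptually): spectral peaks that also appear in the
# stationary-reference trace are attributed to camera motion and discarded.
```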
  3.
    In this paper, an unmanned aerial vehicle (UAV)-enabled dynamic multi-target tracking and data collection framework is presented. Initially, a holistic reputation model is introduced to evaluate the targets' potential in offloading useful data to the UAVs. Based on this model, and taking into account the tracking and sensing characteristics of the UAVs and the targets, a dynamic intelligent matching between the UAVs and the targets is performed. In this setting, the targets are incentivized to offload data through an effort-based price that the UAVs offer them. The resulting optimization problem of determining each target's optimal amount of offloaded data and the corresponding effort-based price offered by the UAV is treated as a Stackelberg game between each target and its associated UAV. Existence, uniqueness, and convergence to the Stackelberg Equilibrium are proven. Detailed numerical results are presented, highlighting the key operational features and the performance benefits of the proposed framework.
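    To make the leader-follower structure concrete, here is a toy backward-induction sketch of a Stackelberg game between one UAV (leader, announcing an effort-based price) and one target (follower, choosing how much data to offload). The quadratic utilities, cost, and data-value parameters are purely hypothetical and only illustrate how such an equilibrium is computed, not the paper's actual model or proofs.

```python
def follower_best_response(price, cost=1.0):
    """Target maximizes price*x - cost*x**2 over offloaded data x -> x* = price / (2*cost)."""
    return price / (2.0 * cost)

def leader_best_price(data_value=4.0, cost=1.0):
    """UAV anticipates the follower's reaction and maximizes (data_value - price) * x*(price)."""
    grid = [i * 0.01 for i in range(1, int(data_value * 100))]
    return max(grid, key=lambda p: (data_value - p) * follower_best_response(p, cost))

price_star = leader_best_price()                 # analytic optimum is data_value / 2 = 2.0
x_star = follower_best_response(price_star)      # equilibrium amount of offloaded data
print(price_star, x_star)
```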
  4.
    Human action recognition is an important topic in artificial intelligence with a wide range of applications, including surveillance systems, search-and-rescue operations, and human-computer interaction. However, most current action recognition systems rely on videos captured by stationary cameras. Another emerging technology is the use of unmanned ground and aerial vehicles (UAV/UGV) for tasks such as transportation, traffic control, border patrolling, and wildlife monitoring; it has become more popular in recent years due to its affordability, high maneuverability, and limited need for human intervention. However, no efficient action recognition algorithm exists for UAV-based monitoring platforms. This paper considers UAV-based video action recognition by addressing the key issues of aerial imaging systems: camera motion and vibration, low resolution, and small human size. In particular, we propose an automated deep learning-based action recognition system comprising three stages: video stabilization using SURF feature selection and the Lucas-Kanade method, human action area detection using faster region-based convolutional neural networks (R-CNN), and action recognition. We propose a novel structure that extends and modifies the InceptionResNet-v2 architecture by combining a 3D CNN architecture and a residual network for action recognition. Applying the algorithm to the popular UCF-ARG aerial imaging dataset, we achieve an average accuracy of 85.83% for entire-video-level recognition, improving upon the state of the art by a margin of 17%.
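    Of the three stages listed above, the stabilization step is the easiest to sketch. The fragment below aligns each frame to its predecessor by tracking corner features with the Lucas-Kanade method and fitting a partial-affine camera-motion model; Shi-Tomasi corners are substituted here because SURF lives in opencv-contrib and may be unavailable, and the detection (Faster R-CNN) and 3D-CNN recognition stages are omitted. A full stabilizer would also accumulate and smooth the per-frame transforms.

```python
import cv2
import numpy as np

def stabilize(frames):
    """Align each frame to its predecessor with a partial-affine camera-motion model."""
    prev_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    out = [frames[0]]
    for frame in frames[1:]:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                      qualityLevel=0.01, minDistance=10)
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        ok = status.ravel() == 1
        # rigid-ish (rotation, translation, scale) model of the camera motion
        M, _ = cv2.estimateAffinePartial2D(nxt[ok], pts[ok])
        h, w = gray.shape
        out.append(cv2.warpAffine(frame, M, (w, h)))   # warp back toward the previous view
        prev_gray = gray
    return out
```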
  5. Dynamic network topology can pose important challenges to communication and control protocols in networks of autonomous vehicles. For instance, maintaining connectivity is a key challenge in unmanned aerial vehicle (UAV) networks. However, the tracking and computational resources of the observer module might not be sufficient for constant monitoring of all surrounding nodes in large-scale networks. In this paper, we propose an optimal measurement policy for network topology monitoring under constrained resources. To this end, we formulate the localization of multiple objects in terms of linear networked systems and solve it using Kalman filtering with intermittent observations. The proposed policy consists of two sequential steps. First, we find the optimal measurement attempt probability for each target using numerical optimization methods in order to assign the limited resources among targets. The optimal resource allocation follows a waterfall-like solution that assigns more resources to targets with lower measurement success probability, providing a 10% to 60% gain in prediction accuracy. The second step is finding optimal on-off patterns for the measurement attempts of each target over time. We show that a regular measurement pattern that evenly distributes resources over time outperforms the two extreme cases of using all measurement resources either at the beginning or at the end of the measurement cycle. Our proof is based on characterizing the fixed-point solution of the error covariance matrix for regular patterns. Extensive simulation results confirm the optimality of the most alternating pattern, with up to 10-fold prediction improvement across different scenarios. Together, these two guidelines define a general policy for target tracking under constrained resources, with applications to network topology prediction in autonomous systems.
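    The core mechanism in the entry above, Kalman filtering with intermittent observations, can be sketched in a few lines: the time update always runs, while the measurement update is applied only when an observation actually arrives (e.g., with some per-target success probability). The constant-velocity model, noise covariances, and 0.6 arrival probability in the usage example are illustrative assumptions.

```python
import numpy as np

def kf_intermittent(zs, arrived, F, H, Q, R, x0, P0):
    """Kalman filter in which the measurement update runs only when a
    measurement actually arrives (intermittent observations)."""
    x, P, history = x0.copy(), P0.copy(), []
    for z, got_meas in zip(zs, arrived):
        x = F @ x                                   # time update (always performed)
        P = F @ P @ F.T + Q
        if got_meas:                                # measurement update (intermittent)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (z - H @ x)
            P = (np.eye(len(x)) - K @ H) @ P
        history.append(x.copy())
    return history

# usage: 1-D constant-velocity target observed with success probability 0.6
rng = np.random.default_rng(0)
F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); R = np.array([[0.5]])
arrived = rng.random(50) < 0.6
zs = [np.array([k + rng.normal(0.0, 0.7)]) for k in range(50)]
estimates = kf_intermittent(zs, arrived, F, H, Q, R, np.zeros(2), np.eye(2))
```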