
Title: Monitoring the earthquake response of full‐scale structures using UAV vision‐based techniques
Vision-based sensing, when used in conjunction with camera-equipped unmanned aerial vehicles (UAVs), has recently emerged as an effective sensing technique in a variety of civil engineering applications (e.g., construction monitoring, condition assessment, and post-disaster reconnaissance). However, the use of these non-intrusive sensing techniques for extracting the dynamic response of structures has been restricted by the perspective and scale distortions, or image misalignments, caused by the movement of the UAV and its on-board camera during flight operations. To overcome these limitations, a vision-based analysis methodology is proposed in the present study for extracting the dynamic response of structures from UAV aerial videos. Importantly, geo-referenced targets were strategically placed on the structures and in the background (stationary) region to enhance the robustness and accuracy of image feature detection. Image processing and photogrammetric techniques are adopted in the analysis procedure, first to recover the camera motion using the world-to-image correspondences of the background (stationary) targets, and subsequently to extract the dynamic structural response by reprojecting the image features of the (moving) targets attached to the structures to world coordinates. The displacement tracking results are validated against the responses of two full-scale test structures measured by analog displacement sensors during a sequence of shake table tests. The high precision (root-mean-square errors below 3 mm) of the vision-based structural displacement results demonstrates the effectiveness of the proposed UAV displacement tracking methodology. Additionally, the limitations of the proposed methodology for monitoring the dynamic responses of real structures, along with potential solutions, are discussed.
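The two-step procedure described above, recovering the camera geometry from the stationary background targets and then reprojecting the moving structural targets into world coordinates, can be illustrated with a minimal planar sketch. The snippet below is a simplified stand-in for the paper's photogrammetric pipeline, not its actual implementation: it assumes all targets lie on a single world plane, estimates a per-frame homography from the stationary targets with a direct linear transform (DLT), and inverts it to map a tracked target's image position back to world coordinates. The function names and the planar assumption are illustrative.

```python
import numpy as np

def fit_homography(world_pts, img_pts):
    """Estimate the 3x3 homography mapping world-plane points to image
    points via the direct linear transform (DLT)."""
    A = []
    for (X, Y), (u, v) in zip(world_pts, img_pts):
        A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
        A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def image_to_world(H, uv):
    """Reproject an image point back onto the world plane."""
    p = np.linalg.inv(H) @ np.array([uv[0], uv[1], 1.0])
    return p[:2] / p[2]

def project(H, XY):
    p = H @ np.array([XY[0], XY[1], 1.0])
    return p[:2] / p[2]

# Synthetic frame: the "camera geometry" encoded as a ground-truth homography.
H_frame = np.array([[1.2, 0.1, 5.0],
                    [0.02, 0.9, 3.0],
                    [1e-3, 2e-3, 1.0]])
# Stationary background targets with known (geo-referenced) world coordinates.
background = [(0, 0), (1, 0), (0, 1), (1, 1), (0.5, 2.0)]
H_est = fit_homography(background, [project(H_frame, w) for w in background])
# A structural target observed in the image is reprojected to world units;
# differencing such positions across frames yields the displacement history.
target_world = image_to_world(H_est, project(H_frame, (0.3, 0.7)))
```

Repeating the homography fit for every video frame compensates for the UAV's own motion, since the background targets redefine the camera geometry frame by frame.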
Journal Name:
Structural Control and Health Monitoring
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, an unmanned aerial vehicle (UAV)-enabled dynamic multi-target tracking and data collection framework is presented. Initially, a holistic reputation model is introduced to evaluate the targets' potential to offload useful data to the UAVs. Based on this model, and taking into account the tracking and sensing characteristics of the UAVs and targets, a dynamic intelligent matching between the UAVs and the targets is performed. In such a setting, the incentivization of the targets to perform the data offloading is based on effort-based pricing that the UAVs offer to the targets. The emerging optimization problem of determining each target's optimal amount of offloaded data and the corresponding effort-based price that the UAV offers is treated as a Stackelberg game between each target and the associated UAV. The existence, uniqueness, and convergence to the Stackelberg equilibrium are proven. Detailed numerical results are presented, highlighting the key operational features and the performance benefits of the proposed framework.
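The leader-follower structure of such a Stackelberg pricing game can be sketched with a toy model. The utility forms below (a linear payment minus a quadratic offloading cost for the target, and value-minus-payment for the UAV) are hypothetical illustrations, not the utilities used in the paper.

```python
import numpy as np

# Toy Stackelberg pricing game (illustrative utilities, not the paper's model).
# Follower (target): chooses offloaded data d to maximize p*d - c*d**2,
# giving the best response d*(p) = p / (2c).
# Leader (UAV): values data at v per unit and pays p per unit, so it
# maximizes (v - p) * d*(p), which peaks at p = v / 2 in this toy model.

def follower_best_response(p, c):
    """Target's optimal offloaded data amount for an offered price p."""
    return p / (2.0 * c)

def leader_utility(p, v, c):
    """UAV's utility, anticipating the target's best response."""
    return (v - p) * follower_best_response(p, c)

def leader_optimal_price(v, c, n_grid=10001):
    """Grid search for the leader's Stackelberg-optimal price."""
    grid = np.linspace(0.0, v, n_grid)
    return grid[np.argmax(leader_utility(grid, v, c))]
```

Because the leader optimizes against the follower's anticipated best response, the fixed point of this backward-induction computation is the Stackelberg equilibrium of the toy game.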
  2. Human action recognition is an important topic in artificial intelligence with a wide range of applications, including surveillance systems, search-and-rescue operations, and human-computer interaction. However, most current action recognition systems utilize videos captured by stationary cameras. Another emerging technology is the use of unmanned ground and aerial vehicles (UAV/UGV) for tasks such as transportation, traffic control, border patrolling, and wildlife monitoring. This technology has become more popular in recent years due to its affordability, high maneuverability, and limited need for human intervention. However, there does not exist an efficient action recognition algorithm for UAV-based monitoring platforms. This paper considers UAV-based video action recognition by addressing the key issues of aerial imaging systems, such as camera motion and vibration, low resolution, and tiny human size. In particular, we propose an automated deep learning-based action recognition system comprising three stages: video stabilization using SURF feature selection and the Lucas-Kanade method, human action area detection using faster region-based convolutional neural networks (R-CNN), and action recognition. We propose a novel structure that extends and modifies the InceptionResNet-v2 architecture by combining a 3D CNN architecture and a residual network for action recognition. We achieve an average accuracy of 85.83% for entire-video-level recognition when applying our algorithm to the popular UCF-ARG aerial imaging dataset. This accuracy significantly improves upon the state-of-the-art accuracy by a margin of 17%.
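The Lucas-Kanade step used in the stabilization stage reduces to a small least-squares problem. The sketch below shows only that core step, estimating a single translational flow for one patch from a synthetic pair of frames; the actual system operates on SURF keypoints with a pyramidal implementation, so this is a simplified illustration.

```python
import numpy as np

def lucas_kanade_patch(I1, I2):
    """Estimate a single translational flow (u, v) between two frames by
    solving the Lucas-Kanade normal equations over the whole patch."""
    Iy, Ix = np.gradient(I1)           # spatial gradients (rows = y, cols = x)
    It = I2 - I1                       # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)       # least-squares flow in pixels

# Synthetic test pair: a Gaussian blob shifted by a known sub-pixel amount.
x, y = np.meshgrid(np.arange(40, dtype=float), np.arange(40, dtype=float))
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 4.0 ** 2))
I1, I2 = blob(20.0, 20.0), blob(20.3, 20.2)   # true flow = (0.3, 0.2)
u, v = lucas_kanade_patch(I1, I2)
```

Averaging such per-feature flows (or fitting a global transform to them) gives the frame-to-frame motion that the stabilizer then removes.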
  3. Dynamic network topology can pose important challenges to communication and control protocols in networks of autonomous vehicles. For instance, maintaining connectivity is a key challenge in unmanned aerial vehicle (UAV) networks. However, the tracking and computational resources of the observer module might not be sufficient for constant monitoring of all surrounding nodes in large-scale networks. In this paper, we propose an optimal measurement policy for network topology monitoring under constrained resources. To this end, we formulate the localization of multiple objects in terms of linear networked systems and solve it using Kalman filtering with intermittent observations. The proposed policy includes two sequential steps. We first find optimal measurement attempt probabilities for each target using numerical optimization methods to assign the limited resources among the targets. The optimal resource allocation follows a waterfall-like solution that assigns more resources to targets with lower measurement success probability, providing a 10% to 60% gain in prediction accuracy. The second step is finding optimal on-off patterns for measurement attempts for each target over time. We show that a regular measurement pattern that evenly distributes resources over time outperforms the two extreme cases of using all measurement resources either at the beginning or at the end of the measurement cycle. Our proof is based on characterizing the fixed-point solution of the error covariance matrix for regular patterns. Extensive simulation results confirm the optimality of the most alternating pattern, with up to 10-fold prediction improvement for different scenarios. These two guidelines define a general policy for target tracking under constrained resources, with applications to network topology prediction of autonomous systems.
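The claim that a regular on-off pattern beats front- or back-loaded measurement schedules can be reproduced with a scalar error-covariance (Riccati) recursion for Kalman filtering with intermittent observations. The system parameters below are made up for illustration; only the qualitative comparison mirrors the paper's result.

```python
# Scalar Riccati recursion for Kalman filtering with intermittent
# observations (toy illustration; parameters a, q, r are made up).
def avg_covariance(pattern, a=1.2, q=1.0, r=1.0, p0=1.0):
    """Run the error-covariance recursion over one measurement cycle and
    return the time-averaged covariance (lower is better)."""
    p, history = p0, []
    for measure in pattern:
        p = a * a * p + q              # prediction step
        if measure:
            p = p * r / (p + r)        # update step (scalar Kalman gain)
        history.append(p)
    return sum(history) / len(history)

# Same measurement budget (5 of 10 slots), three candidate schedules:
regular = [1, 0] * 5                   # evenly spread over the cycle
front   = [1] * 5 + [0] * 5            # all measurements at the beginning
back    = [0] * 5 + [1] * 5            # all measurements at the end
```

Running the recursion shows the regular schedule yields the lowest time-averaged covariance for this unstable system, consistent with the regular-pattern result stated in the abstract.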
  4. Flooding is one of the leading natural-disaster threats to human life and property, especially in densely populated urban areas. Rapid and precise extraction of flooded areas is key to supporting emergency-response planning and providing damage assessment in both spatial and temporal measurements. Unmanned aerial vehicle (UAV) technology has recently been recognized as an efficient photogrammetry data acquisition platform that can quickly deliver high-resolution imagery, thanks to its cost-effectiveness, ability to fly at lower altitudes, and ability to enter hazardous areas. Different image classification methods, including the support vector machine (SVM), have been used for flood extent mapping. In recent years, there has been significant improvement in remote sensing image classification using convolutional neural networks (CNNs). CNNs have demonstrated excellent performance on various tasks, including image classification, feature extraction, and segmentation. CNNs can learn features automatically from large datasets through the organization of multiple layers of neurons and can implement nonlinear decision functions. This study investigates the potential of CNN approaches to extract flooded areas from UAV imagery. A VGG-based fully convolutional network (FCN-16s) was used in this research. The model was fine-tuned, and k-fold cross-validation was applied to estimate its performance on the new UAV imagery dataset. This approach allowed FCN-16s to be trained on datasets containing only one hundred training samples and resulted in a highly accurate classification. A confusion matrix was calculated to estimate the accuracy of the proposed method. The image segmentation results obtained from FCN-16s were compared with the results obtained from FCN-8s, FCN-32s, and SVMs. Experimental results showed that the FCNs could extract flooded areas precisely from UAV images compared with traditional classifiers such as SVMs. The classification accuracy achieved by FCN-16s, FCN-8s, FCN-32s, and SVM for the water class was 97.52%, 97.8%, 94.20%, and 89%, respectively.
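The per-class accuracies reported above come from a confusion matrix over the segmentation labels. A minimal sketch of that evaluation step is given below; the two-class label maps are hypothetical, and the real evaluation operates on full-resolution segmentation masks.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows index the reference class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    return cm

def per_class_accuracy(cm):
    """Fraction of each reference class labeled correctly
    (producer's accuracy, i.e., per-class recall)."""
    return np.diag(cm) / cm.sum(axis=1)

# Tiny example: 0 = non-water, 1 = water (hypothetical label maps).
y_true = np.array([0, 0, 0, 1, 1, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 1, 1, 1, 0])
acc = per_class_accuracy(confusion_matrix(y_true, y_pred, 2))
```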
  5. The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs) on ships without utilizing a GPS signal. The central idea is to automate the Navy helicopter ship-landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar," for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system utilizes machine learning-based object detection for long-range ship tracking and classical computer vision for estimating the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy. A video of the real-world experiments and demonstrations is available at this URL.
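The idea of estimating relative position from a visual cue of known physical size can be illustrated with basic pinhole-camera geometry. The focal length and bar length below are made-up values, and the fronto-parallel assumption is a simplification; they are not taken from the paper's system.

```python
# Pinhole-geometry range estimate from a reference object of known size
# (illustrative sketch; f_px and length_m below are hypothetical values).
def range_from_known_length(f_px, length_m, length_px):
    """Distance to a fronto-parallel object of known physical length:
    Z = f * L / l, with f and l in pixels and L in meters."""
    return f_px * length_m / length_px

def lateral_offset(f_px, z_m, du_px):
    """Lateral offset of the object center from the optical axis:
    X = Z * du / f, with du the pixel offset in the image."""
    return z_m * du_px / f_px
```

For example, a 2 m bar spanning 40 pixels under an 800-pixel focal length implies a 40 m range; a 100-pixel centering error at that range corresponds to a 5 m lateral offset.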