Title: Visual pose estimation of USV from UAV to assist drowning victims recovery
This paper presents the visual localization subsystem of the ongoing EMILY project. The project aims to help responders establish contact with drowning victims as quickly as possible using a 1.3-meter autonomous unmanned surface vehicle (USV). Once a victim is reached, the device serves as a flotation device to support them. An unmanned aerial vehicle (UAV) provides a live video feed to responders and aids the USV's navigation by visually estimating the USV's position and orientation. The motion of both the UAV and the USV, along with varying lighting, distance, and perspective, makes estimating the USV's position and orientation challenging. We present two implemented solutions for reliable visual localization: the first relies on color thresholding and contour detection, and the second uses histograms, back-projection, and the CamShift algorithm. The visual localization system was validated in four proof-of-concept field trials. Other, unsuccessful methods are also discussed.
Award ID(s):
1637214
PAR ID:
10043105
Author(s) / Creator(s):
Date Published:
Journal Name:
Safety, Security, and Rescue Robotics (SSRR), 2016 IEEE International Symposium on
Page Range / eLocation ID:
147 to 153
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In the field of autonomous transportation systems, the integration of Unmanned Aerial Vehicles (UAVs) in emergency response scenarios is important for enhancing operational efficiency and victim positioning. This article presents a novel Positioning, Navigation, and Timing (PNT) framework, named HEROES, which leverages UAVs and integrated sensing and communication technologies to address the challenges in post-disaster environments. Our approach focuses on a comprehensive post-disaster scenario involving multiple victims, first responders, UAVs, and an emergency control center. HEROES enables UAVs to function as anchor nodes and facilitate the precise positioning of the victims while simultaneously collecting critical data from the disaster area. We further introduce a reinforcement learning model based on the Optimistic Q-learning with Upper Confidence Bound algorithm, enabling the victims and first responders to autonomously select the most advantageous UAV connections based on their channel gain, shadowing probability, and positional characteristics. Furthermore, HEROES is based on a satisfaction game-theoretic model to enhance the sensing, communication, and positioning functionalities. Our analysis reveals the existence of various satisfaction equilibria, including the minimum efficient satisfaction equilibrium, ensuring that the UAVs meet their quality-of-service constraints at minimal operational costs. Extensive experimental results validate the scalability and performance of HEROES, demonstrating significant improvements over existing state-of-the-art methods in delivering PNT services during humanitarian emergencies.
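The UAV-selection learner described above combines Q-learning, optimistic initialization, and a UCB exploration bonus. The following is only an illustrative sketch of that combination on a simple bandit over candidate UAV links; the reward model (noisy link quality), step size, and exploration constant are assumptions, not values from the article.

```python
import math
import random

def ucb_q_select(q, counts, t, c=2.0):
    """Pick the arm (UAV link) maximizing Q-value plus a UCB exploration bonus."""
    return max(range(len(q)),
               key=lambda a: q[a] + c * math.sqrt(math.log(t + 1) / (counts[a] + 1)))

def run(link_quality, steps=5000, alpha=0.1, seed=0):
    """Optimistic Q-learning with UCB over a stationary set of UAV links."""
    rng = random.Random(seed)
    q = [1.0] * len(link_quality)    # optimistic initial values drive early exploration
    counts = [0] * len(link_quality)
    for t in range(steps):
        a = ucb_q_select(q, counts, t)
        r = link_quality[a] + rng.gauss(0, 0.05)   # assumed noisy channel-gain reward
        counts[a] += 1
        q[a] += alpha * (r - q[a])                 # standard Q-learning update
    return q, counts

q, counts = run([0.2, 0.5, 0.8])       # UAV 2 offers the best (hypothetical) link
print(counts.index(max(counts)))       # the agent settles on the best UAV
```

The optimism and the UCB bonus serve the same purpose here: under-tried UAV connections keep getting sampled until their estimated value is trustworthy.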
  2.
    In this paper, an unmanned aerial vehicle (UAV)-enabled human Internet of Things (IoT) architecture is introduced to enable rescue operations in public safety systems (PSSs). Initially, the first responders autonomously select the disaster area that they will support by considering the dynamic socio-physical changes of the surrounding environment and following a set of gradient ascent reinforcement learning algorithms. Then, the victims create coalitions among each other and the first responders at each disaster area based on the expectation-maximization approach. Finally, the first responders select the UAVs that communicate with the Emergency Control Center (ECC), to which they will report the data collected from the disaster areas, by adopting a set of log-linear reinforcement learning algorithms. The overall distributed UAV-enabled human IoT architecture is evaluated via detailed numerical results that highlight its key operational features and the performance benefits of the proposed framework.
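Log-linear learning, used above for UAV selection, samples actions with probability proportional to the exponential of their utility. A minimal sketch of that choice rule follows; the utility values and the temperature parameter `beta` are illustrative assumptions.

```python
import math
import random

def log_linear_choice(utilities, beta, rng):
    """Log-linear (Boltzmann) action selection: sample action a with
    probability proportional to exp(beta * utility[a])."""
    weights = [math.exp(beta * u) for u in utilities]
    r = rng.random() * sum(weights)
    for a, w in enumerate(weights):
        r -= w
        if r <= 0:
            return a
    return len(weights) - 1   # guard against floating-point rounding

rng = random.Random(1)
# Hypothetical utilities of reporting to each of three candidate UAVs.
picks = [log_linear_choice([0.1, 0.4, 0.9], beta=10.0, rng=rng) for _ in range(2000)]
print(picks.count(2) / len(picks))   # the high-utility UAV dominates
```

Large `beta` makes the rule nearly greedy while small `beta` makes it nearly uniform, which is what lets log-linear learning trade exploration against convergence to an equilibrium.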
  3. The paper discusses a machine-learning vision and nonlinear control approach for autonomous ship landing of vertical flight aircraft without utilizing a GPS signal. The central idea involves automating the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar", for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine-learning-based object detection for long-range ship tracking, and classical computer vision for object detection and the estimation of the aircraft's relative position and orientation during the final approach and landing phases. The nonlinear controller operates based on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system is implemented on a quadrotor vertical take-off and landing (VTOL)-capable unmanned aerial vehicle (UAV) equipped with an onboard camera, and was demonstrated on a moving deck that imitates realistic ship deck motions using a Stewart platform and a visual cue equivalent to the horizon bar. Extensive simulations and flight tests are conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy while the deck is in motion.
  4.
    Unmanned aerial vehicles (UAVs), equipped with a variety of sensors, are being used to provide actionable information to augment first responders' situational awareness in disaster areas for urban search and rescue (SaR) operations. However, existing aerial robots are unable to sense the occluded spaces in collapsed structures and the voids buried in disaster rubble that may contain victims. In this study, we developed a framework, AiRobSim, to simulate an aerial robot that acquires both aboveground and underground information for post-disaster SaR. The integration of the UAV, ground-penetrating radar (GPR), and other sensors, such as a global navigation satellite system (GNSS) receiver, an inertial measurement unit (IMU), and cameras, enables the aerial robot to provide a holistic view of complex urban disaster areas. The robot-collected data can help locate critical spaces under the rubble to save trapped victims. The simulation framework can serve as a virtual training platform for novice users to control and operate the robot before actual deployment. Data streams provided by the platform, which include maneuver commands, robot states, and environmental information, have the potential to facilitate the understanding of the decision-making process in urban SaR and the training of future intelligent SaR robots.
  5. This paper presents a method of tracking multiple ground targets from an unmanned aerial vehicle (UAV) in a 3D reference frame. The tracking method uses a monocular camera and makes no assumptions on the shape of the terrain or the target motion. The UAV runs two cascaded estimators. The first is an Extended Kalman Filter (EKF), which is responsible for tracking the UAV’s state, such as position and velocity relative to a fixed frame. The second estimator is an EKF that is responsible for estimating a fixed number of landmarks within the camera’s field of view. Landmarks are parameterized by a quaternion associated with bearing from the camera’s optical axis and an inverse distance parameter. The bearing quaternion allows for a minimal representation of each landmark’s direction and distance, a filter with no singularities, and a fast update rate due to few trigonometric functions. Three methods for estimating the ground target positions are demonstrated: the first uses the landmark estimator directly on the targets, the second computes the target depth with a weighted average of converged landmark depths, and the third extends the target’s measured bearing vector to intersect a ground plane approximated from the landmark estimates. Simulation results show that the third target estimation method yields the most accurate results. 
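The third target-estimation method above (the most accurate in the paper's simulations) extends the target's measured bearing ray until it intersects a ground plane fitted to the landmark estimates. The geometry of that step can be sketched directly; the UAV altitude, bearing, and flat plane below are illustrative values, not the paper's data.

```python
import numpy as np

def target_position(cam_pos, bearing, plane_point, plane_normal):
    """Intersect the target's bearing ray from the camera with the ground plane
    (given by a point on it and its normal); returns None if the ray is parallel."""
    d = bearing / np.linalg.norm(bearing)
    n = plane_normal / np.linalg.norm(plane_normal)
    denom = d @ n
    if abs(denom) < 1e-9:
        return None               # ray parallel to the plane: no intersection
    t = ((plane_point - cam_pos) @ n) / denom
    return cam_pos + t * d        # 3D target estimate in the fixed frame

# Demo: UAV camera at 50 m altitude over a flat ground plane z = 0 (assumed).
p = target_position(np.array([0.0, 0.0, 50.0]),
                    np.array([1.0, 0.0, -1.0]),      # 45-degree look-down bearing
                    np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(p)   # (50, 0, 0): the ray hits the ground 50 m ahead of the UAV
```

In the paper's setting the plane parameters would come from the converged landmark estimates of the second EKF rather than being fixed, so the "ground" adapts to the terrain the camera actually sees.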