Title: A Deep Reinforcement Learning Control Strategy for Vision-based Ship Landing of Vertical Flight Aircraft
The paper discusses a deep reinforcement learning (RL) control strategy for fully autonomous vision-based approach and landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs) on ships in the presence of disturbances such as wind gusts. The automation closely follows the Navy helicopter ship landing procedure: using uniquely developed computer vision techniques, it detects a horizon bar that is installed on most Navy ships as a visual aid for pilots. The vision system uses the detected corners of the horizon bar and its known dimensions to estimate the relative position and heading angle of the aircraft. A deep RL-based controller was coupled with the vision system to ensure a safe and robust approach and landing in the proximity of the ship, where the airflow is highly turbulent. The vision and RL-based control system was implemented on a quadrotor UAV, and flight tests were conducted in which the UAV approached and landed on a sub-scale ship platform undergoing 6-degree-of-freedom deck motions in the presence of wind gusts. Simulations and flight tests confirmed the superior disturbance rejection capability of the RL controller when subjected to sudden 5 m/s wind gusts in different directions. Specifically, flight tests showed that the deep RL controller achieved a 50% reduction in lateral drift from the flight path and 3 times faster disturbance rejection compared to a nonlinear proportional-integral-derivative controller.
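The range-from-known-dimensions idea behind the vision system can be illustrated with a minimal pinhole-camera sketch. All numbers below (focal length, bar width, image center, pixel coordinates) are illustrative assumptions, not values from the paper, which uses the full set of detected bar corners:

```python
def estimate_relative_position(u_left, u_right, v_center,
                               f_px=800.0, bar_width_m=2.0,
                               cx=320.0, cy=240.0):
    """Estimate (forward, lateral, vertical) offsets in metres from the
    detected horizon-bar endpoints, assuming a pinhole camera with focal
    length f_px (pixels) and a bar of known physical width bar_width_m."""
    w_px = u_right - u_left                  # apparent bar width in pixels
    forward = f_px * bar_width_m / w_px      # range from similar triangles: Z = f * W / w
    u_mid = 0.5 * (u_left + u_right)
    lateral = (u_mid - cx) * forward / f_px  # X = (u - cx) * Z / f
    vertical = (v_center - cy) * forward / f_px
    return forward, lateral, vertical

# Example: an 80-pixel-wide bar centred in the image implies a 20 m range.
fwd, lat, vert = estimate_relative_position(280.0, 360.0, 240.0)
```

The heading angle would additionally use the perspective distortion of the bar's corners, which this scalar sketch omits.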
Award ID(s):
1946890
PAR ID:
10318624
Journal Name:
AIAA AVIATION 2021 FORUM
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without utilizing a GPS signal. The central idea involves automating the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar", for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system utilizes machine learning based object detection for long-range ship tracking and classical computer vision to estimate the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy. The video of the real-world experiments and demonstrations is available at this URL.
  2. The paper discusses a machine learning vision and nonlinear control approach for autonomous ship landing of vertical flight aircraft without utilizing a GPS signal. The central idea involves automating the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar", for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system utilizes machine learning based object detection for long-range ship tracking, and classical computer vision for object detection and the estimation of the aircraft's relative position and orientation during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor vertical take-off and landing (VTOL) capable unmanned aerial vehicle (UAV) equipped with an onboard camera and was demonstrated on a moving deck that imitates realistic ship deck motions using a Stewart platform and a visual cue equivalent to the horizon bar. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy while the deck is in motion.
  3. We propose a predictive runtime monitoring approach for linear systems with stochastic disturbances. The goal of the monitor is to decide whether there exists a possible sequence of control inputs over a given time horizon that maintains a safety property with sufficiently high probability. We derive an efficient algorithm for performing the predictive monitoring in real time, specifically for linear time-invariant (LTI) systems driven by stochastic disturbances. The algorithm implicitly defines a control envelope set such that, if the current control input to the system lies in this set, there exists a future strategy over a time horizon of the next N steps that guarantees the safety property of interest. As a result, the proposed monitor is oblivious to the actual controller and is therefore applicable even in the presence of complex control systems, including highly adaptive controllers. Furthermore, we apply our proposed approach to monitor whether a UAV will respect a “geofence” defined by a geographical region over which the vehicle may operate. To achieve this, we construct a data-driven linear model of the UAV's dynamics, while carefully modeling the uncertainties due to wind, GPS errors, and modeling errors as time-varying disturbances. Using realistic data obtained from flight tests, we demonstrate the advantages and drawbacks of the predictive monitoring approach.
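The safety question the monitor answers can be approximated, for a toy scalar LTI system, by Monte Carlo simulation. The dynamics, disturbance bound, and safety threshold below are illustrative assumptions; the paper's algorithm computes the control envelope set analytically rather than by sampling:

```python
import random

def safe_probability(x0, u_seq, a=0.9, b=1.0, w_max=0.1,
                     x_limit=5.0, trials=2000, seed=0):
    """Monte Carlo estimate of P(|x[k]| <= x_limit over the horizon) for the
    scalar system x[k+1] = a*x[k] + b*u[k] + w[k], w uniform in [-w_max, w_max]."""
    rng = random.Random(seed)
    safe = 0
    for _ in range(trials):
        x, ok = x0, True
        for u in u_seq:
            x = a * x + b * u + rng.uniform(-w_max, w_max)
            if abs(x) > x_limit:
                ok = False
                break
        safe += ok
    return safe / trials

def monitor_ok(x0, u_seq, p_min=0.99):
    """Flag whether a high-probability safe trajectory exists under this input plan."""
    return safe_probability(x0, u_seq) >= p_min
```

Because the check depends only on the state, the candidate inputs, and the disturbance model, it is independent of whatever controller produced the inputs, mirroring the controller-oblivious property described above.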
  4. An integrated sensing approach that fuses vision and range information to land an autonomous class 1 unmanned aerial system (UAS) controlled by e-modification model reference adaptive control is presented. The navigation system uses a feature detection algorithm to locate features and compute the corresponding range vectors on a coarsely instrumented landing platform. The relative translation and rotation state is estimated and sent to the flight computer for control feedback. A robust adaptive control law that guarantees uniform ultimate boundedness of the adaptive gains in the presence of bounded external disturbances is used to control the flight vehicle. Experimental flight tests were conducted to validate the integration of these systems and to measure the quality of the results from the navigation solution. Robustness of the control law amidst flight disturbances and hardware failures is demonstrated. The research results demonstrate the utility of low-cost, low-weight navigation solutions for guiding small, autonomous UAS to carry out littoral proximity operations around unprepared ship decks.
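The e-modification idea can be sketched on a scalar toy plant. The plant, reference model, and gains below are illustrative assumptions, not the paper's flight controller; the point is the damping term sigma*|e|*theta, which is what bounds the adaptive gain under disturbances:

```python
def simulate_e_mod(a=1.0, am=-2.0, gamma=20.0, sigma=0.1,
                   r=1.0, dt=0.001, steps=8000):
    """Scalar MRAC with e-modification: plant xdot = a*x + u with unknown a,
    reference model xmdot = am*xm + r, adaptive gain theta estimating a.
    Returns the final tracking error and adaptive gain."""
    x, xm, theta, e = 0.0, 0.0, 0.0, 0.0
    for _ in range(steps):
        e = x - xm                                 # tracking error
        u = -theta * x + am * x + r                # cancel a*x, impose model dynamics
        theta += dt * gamma * (e * x - sigma * abs(e) * theta)  # e-modification law
        x += dt * (a * x + u)                      # forward-Euler plant step
        xm += dt * (am * xm + r)                   # forward-Euler model step
    return e, theta

e_final, theta = simulate_e_mod()
```

Without the sigma term this reduces to the classical MIT-rule-style update, whose gains can drift unbounded under persistent disturbances; the |e|-weighted leakage vanishes as the error does, which is what yields uniform ultimate boundedness rather than asymptotic convergence.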
  5. Abstract Uncrewed aerial vehicles are integral to a smart city framework, but the dynamic environments above and within urban settings are dangerous for autonomous flight. Wind gusts caused by the uneven landscape jeopardize safe and effective aircraft operation. Birds rapidly reject gusts by changing their wing shape, but current gust alleviation methods for aircraft still use discrete control surfaces. Additionally, modern gust alleviation controllers challenge small uncrewed aerial vehicle power constraints by relying on extensive sensing networks and computationally expensive modeling. Here we show end-to-end deep reinforcement learning forgoing state inference to efficiently alleviate gusts on a smart material camber-morphing wing. In a series of wind tunnel gust experiments at the University of Michigan, trained controllers reduced gust impact by 84% from on-board pressure signals. Notably, gust alleviation using signals from only three pressure taps was statistically indistinguishable from using six pressure tap signals. By efficiently rejecting environmental perturbations, reduced-sensor fly-by-feel controllers open the door to small uncrewed aerial vehicle missions in cities. 