Title: Intelligent Vision-based Autonomous Ship Landing of VTOL UAVs
The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without relying on GPS signals. The central idea is to automate the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar," for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine-learning-based object detection for long-range ship tracking and classical computer vision to estimate the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy. The video of the real-world experiments and demonstrations is available at this URL.
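The final-phase relative-position estimate from the detected horizon bar can be illustrated with a minimal pinhole-geometry sketch. The focal length, bar width, and the simple fronto-parallel assumption below are illustrative, not the paper's actual calibration or algorithm:

```python
import numpy as np

# Minimal pinhole-geometry sketch. The focal length and bar width are
# assumed example values, not the paper's calibration.
FOCAL_PX = 800.0      # assumed camera focal length (pixels)
BAR_WIDTH_M = 2.4     # assumed horizon-bar width (metres)

def relative_position(left_px, right_px, cx, cy):
    """Estimate range, lateral and vertical offsets (metres), and the
    apparent roll (degrees) of the camera relative to the bar centre
    from the two detected bar-end pixels, assuming the bar is roughly
    fronto-parallel to the image plane."""
    (ul, vl), (ur, vr) = left_px, right_px
    width_px = np.hypot(ur - ul, vr - vl)
    rng = FOCAL_PX * BAR_WIDTH_M / width_px       # similar triangles
    u_mid, v_mid = (ul + ur) / 2.0, (vl + vr) / 2.0
    lateral = (u_mid - cx) * rng / FOCAL_PX       # offset from principal point
    vertical = (v_mid - cy) * rng / FOCAL_PX
    roll = np.degrees(np.arctan2(vr - vl, ur - ul))
    return rng, lateral, vertical, roll
```

For example, a 2.4 m bar spanning 192 px and centred on the principal point resolves to a 10 m range with zero lateral, vertical, and roll offsets; a full implementation would instead solve a perspective-n-point problem from all four bar corners.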
Award ID(s):
1946890
NSF-PAR ID:
10318627
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of the American Helicopter Society
ISSN:
0002-8711
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The paper discusses a machine learning vision and nonlinear control approach for autonomous ship landing of vertical flight aircraft without utilizing GPS signal. The central idea involves automating the Navy helicopter ship landing procedure where the pilot utilizes the ship as the visual reference for long-range tracking, but refers to a standardized visual cue installed on most Navy ships called the "horizon bar" for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system utilizes machine learning based object detection for long-range ship tracking, and classical computer vision for object detection and the estimation of aircraft relative position and orientation during the final approach and landing phases. The nonlinear controller operates based on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system is implemented on a quad-rotor vertical take-off and landing (VTOL) capable unmanned aerial vehicle (UAV) equipped with an onboard camera and was demonstrated on a moving deck, which imitates realistic ship deck motions using a Stewart platform and a visual cue equivalent to the horizon bar. Extensive simulations and flight tests are conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy while the deck is in motion.
  2. The paper discusses a deep reinforcement learning (RL) control strategy for fully autonomous vision-based approach and landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs) on ships in the presence of disturbances such as wind gusts. The automation closely follows the Navy helicopter ship landing procedure and therefore detects the horizon bar that is installed on most Navy ships as a visual aid for pilots, applying uniquely developed computer vision techniques. The vision system utilizes the detected corners of the horizon bar and its known dimensions to estimate the relative position and heading angle of the aircraft. A deep RL-based controller was coupled with the vision system to ensure a safe and robust approach and landing in the proximity of the ship, where the airflow is highly turbulent. The vision and RL-based control system was implemented on a quadrotor UAV, and flight tests were conducted in which the UAV approached and landed on a sub-scale ship platform undergoing 6 degree-of-freedom deck motions in the presence of wind gusts. Simulations and flight tests confirmed the superior disturbance rejection capability of the RL controller when subjected to sudden 5 m/s wind gusts in different directions. Specifically, it was observed during flight tests that the deep RL controller demonstrated a 50% reduction in lateral drift from the flight path and 3 times faster disturbance rejection in comparison to a nonlinear proportional-integral-derivative controller.
  3. Machine learning driven image-based controllers allow robotic systems to take intelligent actions based on the visual feedback from their environment. Understanding when these controllers might lead to system safety violations is important for their integration in safety-critical applications and engineering corrective safety measures for the system. Existing methods leverage simulation-based testing (or falsification) to find the failures of vision-based controllers, i.e., the visual inputs that lead to closed-loop safety violations. However, these techniques do not scale well to the scenarios involving high-dimensional and complex visual inputs, such as RGB images. In this work, we cast the problem of finding closed-loop vision failures as a Hamilton-Jacobi (HJ) reachability problem. Our approach blends simulation-based analysis with HJ reachability methods to compute an approximation of the backward reachable tube (BRT) of the system, i.e., the set of unsafe states for the system under vision-based controllers. Utilizing the BRT, we can tractably and systematically find the system states and corresponding visual inputs that lead to closed-loop failures. These visual inputs can be subsequently analyzed to find the input characteristics that might have caused the failure. Besides its scalability to high-dimensional visual inputs, an explicit computation of BRT allows the proposed approach to capture non-trivial system failures that are difficult to expose via random simulations. We demonstrate our framework on two case studies involving an RGB image-based neural network controller for (a) autonomous indoor navigation, and (b) autonomous aircraft taxiing. 
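The backward-reachable-tube idea in the abstract above can be illustrated with a coarse grid-based sketch for a toy closed-loop system. The 1D vertical double-integrator model, the stand-in feedback law, and the nearest-neighbour value recursion below are illustrative assumptions, far simpler than the paper's Hamilton-Jacobi reachability computation over visual inputs:

```python
import numpy as np

# Coarse grid sketch of a backward reachable tube (BRT) for a toy
# closed-loop system: altitude p and vertical speed v with p' = v,
# v' = u(p, v). The feedback law is an assumed stand-in for the
# vision-based controller; the unsafe set is p <= 0 (deck collision).
DT, STEPS = 0.1, 50
P = np.linspace(0.0, 5.0, 101)           # altitude grid (m)
V = np.linspace(-3.0, 3.0, 61)           # vertical-speed grid (m/s)
PP, VV = np.meshgrid(P, V, indexing="ij")

def controller(p, v):
    # Assumed saturated PD law holding a 2 m hover, standing in for
    # the learned/vision-based controller being analysed.
    return np.clip(-(p - 2.0) - 1.5 * v, -1.0, 1.0)

# l(x) <= 0 encodes the unsafe set; here simply l = p.
l = PP.copy()

# One closed-loop Euler step from every grid node, snapped back to the
# nearest node (a crude stand-in for proper interpolation).
p_next = np.clip(PP + DT * VV, P[0], P[-1])
v_next = np.clip(VV + DT * controller(PP, VV), V[0], V[-1])
ip = np.clip(np.round((p_next - P[0]) / (P[1] - P[0])).astype(int), 0, len(P) - 1)
iv = np.clip(np.round((v_next - V[0]) / (V[1] - V[0])).astype(int), 0, len(V) - 1)

# Value recursion V(x) = min(l(x), V(next state)); BRT = {x : V(x) <= 0},
# i.e. states whose closed-loop rollout enters the unsafe set.
value = l.copy()
for _ in range(STEPS):
    value = np.minimum(l, value[ip, iv])
brt = value <= 0.0
```

States inside `brt` (e.g. low altitude with a large descent rate) are exactly those whose rollouts hit the deck under this controller; the paper's contribution is doing the analogous computation when the controller is driven by high-dimensional RGB inputs, and then tracing BRT states back to the images that cause the failures.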
  4. This paper considers the position and attitude tracking control problem of a vertical take-off and landing unmanned aerial vehicle with uncertainty and input constraints. To handle parametric and non-parametric uncertainties in the system dynamics, a robust adaptive tracking controller is proposed that exploits the special structure of those dynamics. To handle uncertainty together with input constraints, a robust adaptive saturation controller is proposed with the aid of an auxiliary compensated system. Simulation results show the effectiveness of the proposed algorithms.
  5. This study proposes using a wavelet neural network (WNN) with a feedforward component and a model predictive controller (MPC) for online nonlinear system identification and control over a communication network. The WNN performs the online identification of the nonlinear system. The MPC uses the model to predict the future outputs of the system over an extended prediction horizon and calculates the optimal future inputs by minimizing a controller cost function. A computationally efficient formulation for the controller is presented to reduce the computational complexity of the MPC for online implementation and Lyapunov theory is used to prove the stability of the MPC. The methodology is applied to the online identification and control of an unmanned autonomous vehicle. Simulation results show that the MPC with an extended prediction horizon can effectively control the system in the presence of fixed or random network delay.
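The receding-horizon loop described above can be illustrated with a minimal linear MPC sketch. The discrete-time model, weights, and horizon below are illustrative assumptions; the study itself identifies the model online with a wavelet neural network and uses a computationally efficient controller formulation:

```python
import numpy as np

# Minimal unconstrained linear MPC sketch. The model x+ = A x + B u,
# weights, and horizon are assumed for illustration only.
A = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy double-integrator model
B = np.array([[0.005], [0.1]])
N = 15                                   # prediction horizon (steps)
Q = np.diag([10.0, 1.0])                 # state tracking weight
R = np.array([[0.1]])                    # input weight

def mpc_step(x0, x_ref):
    """Minimise sum_k ||x_k - x_ref||_Q^2 + ||u_k||_R^2 over the
    horizon and return only the first optimal input (receding horizon)."""
    n, m = A.shape[0], B.shape[1]
    # Stack predictions: X = F x0 + G U with U = [u_0; ...; u_{N-1}].
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    Qbar, Rbar = np.kron(np.eye(N), Q), np.kron(np.eye(N), R)
    # The cost is quadratic in U; its unconstrained minimiser is linear in x0.
    H = G.T @ Qbar @ G + Rbar
    f = G.T @ Qbar @ (F @ x0 - np.tile(x_ref, N))
    return np.linalg.solve(H, -f)[:m]    # apply only the first input
```

In closed loop the problem is re-solved at every step from the newly measured state, which is what lets the controller absorb model updates from the online identifier and tolerate network delay within the horizon.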

     