
Title: Machine Learning Vision and Nonlinear Control Approach for Autonomous Ship Landing of Vertical Flight Aircraft
The paper discusses a machine learning vision and nonlinear control approach for autonomous ship landing of vertical flight aircraft without utilizing GPS signals. The central idea involves automating the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar," for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system utilizes machine learning-based object detection for long-range ship tracking, and classical computer vision for object detection and the estimation of aircraft relative position and orientation during the final approach and landing phases. The nonlinear controller operates based on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system is implemented on a quad-rotor vertical take-off and landing (VTOL) capable unmanned aerial vehicle (UAV) equipped with an onboard camera and was demonstrated on a moving deck that imitates realistic ship deck motions using a Stewart platform and a visual cue equivalent to the horizon bar. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy while the deck is in motion.
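For the final approach phase described above, the relative position and orientation can in principle be recovered from the four detected corners of a horizon-bar-like cue with known dimensions via a standard perspective-n-point solve. The sketch below illustrates that idea with OpenCV; the bar dimensions, camera intrinsics handling, and corner ordering are assumptions made for illustration and are not taken from the paper.

```python
# Sketch: relative pose from the four detected corners of a horizon-bar-like
# marker with known dimensions, using a standard PnP solve (OpenCV).
# The bar width/height and corner ordering below are illustrative assumptions.
import numpy as np
import cv2

BAR_WIDTH_M = 1.2    # assumed physical width of the visual cue
BAR_HEIGHT_M = 0.1   # assumed physical height

# 3D corner coordinates in the marker frame (origin at the bar centre).
object_points = np.array([
    [-BAR_WIDTH_M / 2, -BAR_HEIGHT_M / 2, 0.0],
    [ BAR_WIDTH_M / 2, -BAR_HEIGHT_M / 2, 0.0],
    [ BAR_WIDTH_M / 2,  BAR_HEIGHT_M / 2, 0.0],
    [-BAR_WIDTH_M / 2,  BAR_HEIGHT_M / 2, 0.0],
], dtype=np.float64)

def relative_pose(image_corners, camera_matrix, dist_coeffs):
    """Estimate camera position/orientation relative to the marker.

    image_corners: (4, 2) pixel coordinates of the detected corners,
    ordered consistently with object_points.
    """
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  np.asarray(image_corners, dtype=np.float64),
                                  camera_matrix, dist_coeffs)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # rotation from marker frame to camera frame
    cam_pos_in_marker = -R.T @ tvec     # camera position expressed in the marker frame
    return cam_pos_in_marker.ravel(), R
```

The returned translation and rotation would then feed the guidance loop in the same role as the vision-based relative-state estimate described in the abstract.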
Authors:
Award ID(s):
1946890
Publication Date:
NSF-PAR ID:
10318628
Journal Name:
77th Annual National Forum of the Vertical Flight Society
Sponsoring Org:
National Science Foundation
More Like this
  1. The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without utilizing GPS signals. The central idea involves automating the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar," for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system utilizes machine learning-based object detection for long-range ship tracking and classical computer vision for the estimation of aircraft relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates based on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy. The video of the real-world experiments and demonstrations is available at this URL.
  2. The paper discusses a deep reinforcement learning (RL) control strategy for fully autonomous vision-based approach and landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs) on ships in the presence of disturbances such as wind gusts. The automation closely follows the Navy helicopter ship landing procedure and therefore detects a horizon bar, installed on most Navy ships as a visual aid for pilots, by applying uniquely developed computer vision techniques. The vision system utilizes the detected corners of the horizon bar and its known dimensions to estimate the relative position and heading angle of the aircraft. A deep RL-based controller was coupled with the vision system to ensure a safe and robust approach and landing in the proximity of the ship, where the airflow is highly turbulent (see the policy-in-the-loop sketch after this list). The vision and RL-based control system was implemented on a quadrotor UAV, and flight tests were conducted in which the UAV approached and landed on a sub-scale ship platform undergoing 6-degree-of-freedom deck motions in the presence of wind gusts. Simulations and flight tests confirmed the superior disturbance rejection capability of the RL controller when subjected to sudden 5 m/s wind gusts in different directions. Specifically, it was observed during flight tests that the deep RL controller demonstrated a 50% reduction in lateral drift from the flight path and 3 times faster disturbance rejection in comparison to a nonlinear proportional-integral-derivative controller.
  3. An integrated sensing approach that fuses vision and range information to land an autonomous class 1 unmanned aerial system (UAS) controlled by e-modification model reference adaptive control is presented. The navigation system uses a feature detection algorithm to locate features and compute the corresponding range vectors on a coarsely instrumented landing platform. The relative translation and rotation state is estimated and sent to the flight computer for control feedback. A robust adaptive control law that guarantees uniform ultimate boundedness of the adaptive gains in the presence of bounded external disturbances is used to control the flight vehicle (a minimal scalar sketch of an e-modification update appears after this list). Experimental flight tests are conducted to validate the integration of these systems and to measure the quality of results from the navigation solution. Robustness of the control law amidst flight disturbances and hardware failures is demonstrated. The research results demonstrate the utility of low-cost, low-weight navigation solutions for navigating small, autonomous UAS to carry out littoral proximity operations around unprepared ship decks.
  4. The ocean is a vast three-dimensional space that is poorly explored and understood, and harbors unobserved life and processes that are vital to ecosystem function. To fully interrogate the space, novel algorithms and robotic platforms are required to scale up observations. Locating animals of interest and making extended visual observations in the water column are particularly challenging objectives. Towards that end, we present a novel Machine Learning-integrated Tracking (or ML-Tracking) algorithm for underwater vehicle control that builds on the class of algorithms known as tracking-by-detection (a schematic tracking-by-detection loop is sketched after this list). By coupling a multi-object detector (trained on in situ underwater image data), a 3D stereo tracker, and a supervisor module to oversee the mission, we show how ML-Tracking can create the robust tracks needed for long-duration observations, as well as enable fully automated acquisition of objects for targeted sampling. Using a remotely operated vehicle as a proxy for an autonomous underwater vehicle, we demonstrate continuous input from the ML-Tracking algorithm to the vehicle controller during a record 5+ hour continuous observation of a midwater gelatinous animal known as a siphonophore. These efforts clearly demonstrate the potential of tracking-by-detection algorithms for exploration of unexplored environments and discovery of undiscovered life in our ocean.
  5. Vision serves as an essential sensory input for insects but consumes substantial energy resources. The cost to support sensitive photoreceptors has led many insects to develop high visual acuity in only small retinal regions and evolve to move their visual systems independent of their bodies through head motion. By understanding the trade-offs made by insect vision systems in nature, we can design better vision systems for insect-scale robotics in a way that balances energy, computation, and mass. Here, we report a fully wireless, power-autonomous, mechanically steerable vision system that imitates head motion in a form factor small enough to mount on the back of a live beetle or a similarly sized terrestrial robot. Our electronics and actuator weigh 248 milligrams and can steer the camera over 60° based on commands from a smartphone. The camera streams "first person" 160-by-120-pixel monochrome video at 1 to 5 frames per second (fps) to a Bluetooth radio from up to 120 meters away. We mounted this vision system on two species of freely walking live beetles, demonstrating that triggering image capture using an onboard accelerometer achieves operational times of up to 6 hours with a 10-milliamp-hour battery. We also built a small terrestrial robot (1.6 centimeters by 2 centimeters) that can move at up to 3.5 centimeters per second, support vision, and operate for 63 to 260 minutes. Our results demonstrate that steerable vision can enable object tracking and wide-angle views for 26 to 84 times lower energy than moving the whole robot.
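Related work 2 above couples a deep RL policy with vision-derived relative-state estimates. The following is a minimal policy-in-the-loop sketch of that idea; the observation layout (relative position, relative heading, velocity), the network size, and the action interpretation are assumptions for illustration, not design choices taken from that paper.

```python
# Minimal policy-in-the-loop sketch for a vision-fed RL landing controller.
# Observation layout, architecture, and action meaning are assumed here.
import torch
import torch.nn as nn

class LandingPolicy(nn.Module):
    """Small fully connected policy mapping a vision-derived state to commands."""
    def __init__(self, obs_dim=7, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(),
            nn.Linear(64, 64), nn.Tanh(),
            nn.Linear(64, act_dim), nn.Tanh(),  # outputs normalized to [-1, 1]
        )

    def forward(self, obs):
        return self.net(obs)

def control_step(policy, rel_pos, rel_heading, velocity):
    """One control cycle: vision estimate -> observation vector -> action."""
    obs = torch.tensor([*rel_pos, rel_heading, *velocity], dtype=torch.float32)
    with torch.no_grad():
        action = policy(obs)
    # Hypothetical interpretation: [roll_cmd, pitch_cmd, yaw_rate_cmd, thrust]
    return action.numpy()
```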

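Related work 3 relies on an e-modification model reference adaptive control law. Below is a minimal scalar sketch of how the e-modification damping term, which scales with the tracking error magnitude, keeps an adaptive gain bounded; the adaptation rate, damping coefficient, regressor, and discretization are all assumptions, and the actual law in that work is multivariable.

```python
# Sketch: a scalar e-modification adaptive-gain update of the kind used in
# robust model reference adaptive control. All numeric values are assumed.
GAMMA = 2.0    # adaptation rate (assumed)
SIGMA = 0.1    # e-modification damping coefficient (assumed)
DT = 0.01      # control time step in seconds (assumed)

def update_adaptive_gain(theta, tracking_error, regressor):
    """Euler step of theta_dot = -GAMMA * (phi * e + SIGMA * |e| * theta).

    The damping term scales with |e|, which is what gives uniform ultimate
    boundedness of the gain under bounded disturbances.
    """
    theta_dot = -GAMMA * (regressor * tracking_error
                          + SIGMA * abs(tracking_error) * theta)
    return theta + DT * theta_dot
```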
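Related work 4 structures its ML-Tracking approach as tracking-by-detection: a learned multi-object detector feeds a 3D stereo tracker while a supervisor module oversees the mission. The loop below is a schematic of that structure only; every class and method name is a hypothetical placeholder, not the ML-Tracking API.

```python
# Schematic tracking-by-detection loop. Detector, stereo tracker, supervisor,
# and vehicle interfaces are hypothetical placeholders.
def run_observation_loop(camera, detector, stereo_tracker, supervisor, vehicle):
    while supervisor.mission_active():
        left, right = camera.grab_stereo_pair()
        detections = detector.detect(left)                        # 2D boxes from the learned detector
        target = stereo_tracker.update(detections, left, right)   # 3D track of the chosen target
        if target is None:
            vehicle.hold_station()        # fall back to station keeping when the track is lost
            continue
        setpoint = supervisor.filter(target)   # sanity-check and smooth the track
        vehicle.track(setpoint)                # keep the animal centred in the camera view
```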