Title: Intelligent Vision-based Autonomous Ship Landing of VTOL UAVs
The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without utilizing GPS signals. The central idea involves automating the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but relies on a standardized visual cue installed on most Navy ships, called the "horizon bar," for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine-learning-based object detection for long-range ship tracking and classical computer vision to estimate the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system was implemented on a quad-rotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy. A video of the real-world experiments and demonstrations is available at this URL.
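To make the classical-vision step concrete, below is a minimal sketch, assuming a rectangular horizon-bar-style cue with known (placeholder) dimensions, a calibrated camera, and consistently ordered corner detections. It uses OpenCV's planar PnP solver and is an illustration only, not the authors' implementation.

# Minimal sketch: relative pose from the four detected corners of a
# horizon-bar-style cue with known dimensions (values are placeholders).
import numpy as np
import cv2

BAR_WIDTH_M = 1.2   # assumed physical width of the cue
BAR_HEIGHT_M = 0.1  # assumed physical height of the cue

# Corner coordinates of the cue in its own frame (planar target, Z = 0)
OBJECT_POINTS = np.array([
    [-BAR_WIDTH_M / 2, -BAR_HEIGHT_M / 2, 0.0],
    [ BAR_WIDTH_M / 2, -BAR_HEIGHT_M / 2, 0.0],
    [ BAR_WIDTH_M / 2,  BAR_HEIGHT_M / 2, 0.0],
    [-BAR_WIDTH_M / 2,  BAR_HEIGHT_M / 2, 0.0],
], dtype=np.float64)

def estimate_relative_pose(image_corners, camera_matrix, dist_coeffs):
    """image_corners: 4x2 pixel coordinates ordered like OBJECT_POINTS."""
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS,
        np.asarray(image_corners, dtype=np.float64),
        camera_matrix,
        dist_coeffs,
        flags=cv2.SOLVEPNP_IPPE,  # solver specialized for planar targets
    )
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                              # cue frame -> camera frame
    heading_deg = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # heading-like angle
    return tvec.ravel(), heading_deg                        # relative position (m), angle (deg)

The translation vector gives the camera's offset from the cue, which a controller could consume as relative-position feedback during the final approach.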
Award ID(s):
1946890
NSF-PAR ID:
10318627
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of the American Helicopter Society
ISSN:
0002-8711
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The paper discusses a machine-learning vision and nonlinear control approach for autonomous ship landing of vertical flight aircraft without utilizing GPS signals. The central idea involves automating the Navy helicopter ship landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but relies on a standardized visual cue installed on most Navy ships, called the "horizon bar," for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine-learning-based object detection for long-range ship tracking (an illustrative detection sketch appears after this list) and classical computer vision for object detection and the estimation of aircraft relative position and orientation during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship landing system is implemented on a quad-rotor vertical take-off and landing (VTOL) capable unmanned aerial vehicle (UAV) equipped with an onboard camera and is demonstrated on a moving deck that imitates realistic ship deck motions using a Stewart platform and a visual cue equivalent to the horizon bar. Extensive simulations and flight tests are conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy while the deck is in motion.
  2. The paper discusses a deep reinforcement learning (RL) control strategy for fully autonomous vision-based approach and landing of vertical take-off and landing (VTOL) capable unmanned aerial vehicles (UAVs) on ships in the presence of disturbances such as wind gusts. The automation closely follows the Navy helicopter ship landing procedure and therefore detects the horizon bar that is installed on most Navy ships as a visual aid for pilots, using uniquely developed computer vision techniques. The vision system uses the detected corners of the horizon bar and its known dimensions to estimate the relative position and heading angle of the aircraft. A deep RL-based controller was coupled with the vision system to ensure a safe and robust approach and landing in the proximity of the ship, where the airflow is highly turbulent. The vision and RL-based control system was implemented on a quadrotor UAV, and flight tests were conducted in which the UAV approached and landed on a sub-scale ship platform undergoing 6-degree-of-freedom deck motions in the presence of wind gusts. Simulations and flight tests confirmed the superior disturbance rejection capability of the RL controller when subjected to sudden 5 m/s wind gusts in different directions. Specifically, it was observed during flight tests that the deep RL controller demonstrated a 50% reduction in lateral drift from the flight path and 3 times faster disturbance rejection in comparison to a nonlinear proportional-integral-derivative controller.
  3. This research paper introduces a unique system called ZORQ that combines a game development framework and a gamification framework (GDGF). The ZORQ GDGF acts as a catalyst to help motivate students by increasing student engagement and success within undergraduate Computer Science (CS) education, regardless of student experience and background. The dynamic gamification elements utilized within the GDGF make it an attractive learning method for students. After collaborative game space customization, ZORQ gameplay sees each student tasked with designing a ship movement philosophy and then implementing their own code to autonomously control the ship in an interstellar game space filled with supplies, obstacles, and enemy ships. The particulars of engagements between ships can vary greatly by semester, along with the resources/objects present in the game, depending on the collaborative customization and the independent ship strategies implemented. A preliminary ZORQ trial was conducted over five years in an undergraduate Data Structures and Algorithms (DSA) course. The ZORQ trial was designed to fulfill the following objectives: 1) implement DSA concepts discussed within the course, 2) identify appropriate problem-solving approaches, 3) apply one or more solutions, 4) build depth with a coding language, 5) bridge the gap between limited concept assignments and large, multi-developer software systems by allowing students to build code within a larger architecture, 6) introduce students to version control, 7) illustrate the use of prior mathematics coursework in practical applications, and 8) introduce unit testing in software systems. In exit surveys, students expressed overwhelming satisfaction with this approach. More than 84% of the students surveyed found the system useful in their educational experience and saw benefit in inspecting a completed software project. 82% of the students found that ZORQ increased software development comprehension. 80% enjoyed using their own personal creativity in designing a ship controller, and 76% found ZORQ helped them learn how to implement and use DSAs. 71% found the system engaging and found the system interaction to be clear and understandable. Observations of student performance in later courses suggest better student maturity and comprehension in preparation for proposing and implementing their own independent projects.
  4. The lack of inherent security controls makes traditional Controller Area Network (CAN) buses vulnerable to Machine-In-The-Middle (MitM) cybersecurity attacks. Conventional vehicular MitM attacks involve tampering with the hardware to directly manipulate CAN bus traffic. We show, however, that MitM attacks can be realized without direct tampering of any CAN hardware. Our demonstration leverages how diagnostic applications based on RP1210 are vulnerable to Machine-In-The-Middle attacks. Test results show SAE J1939 communications, including single-frame and multi-framed broadcast and on-request messages, are susceptible to data manipulation attacks where a shim DLL is used as a Machine-In-The-Middle. The demonstration shows these attacks can manipulate data in ways that may mislead vehicle operators into taking the wrong actions. A solution is proposed to mitigate these attacks by utilizing message authentication codes or authenticated encryption with pre-shared keys between the communicating parties (a minimal illustrative sketch follows this list). Various tradeoffs, such as communication overhead, encryption time, and J1939 protocol compliance, are presented while implementing the mitigation strategy. One of our key findings is that the data flowing through RP1210-based diagnostic systems are vulnerable to MitM attacks launched from the host diagnostics computer. Security models should include controls to detect and mitigate these data flows. An example of a cryptographic security control to mitigate the risk of an MitM attack was implemented and demonstrated using the SAE J1939 DM18 message. This approach, however, utilizes over twice the bandwidth of normal communications. Sensitive data should utilize such a security control.
  5. This work proposes vision-only navigation strategies for an autonomous underwater robot. This approach is a step towards solving the coverage path planning problem in a 3-D environment for surveying underwater structures. Given the challenging conditions of the underwater domain, it is very complicated to obtain accurate state estimates reliably. Consequently, it is a great challenge to extend known path planning or coverage techniques developed for aerial or ground robots. In this work, we investigate a navigation strategy that uses only vision to assist in covering a complex underwater structure. We propose a navigation strategy akin to what a human diver would execute when circumnavigating a region of interest, in particular when collecting data from a shipwreck. The focus of this article is a step towards enabling the autonomous operation of lightweight robots near underwater wrecks in order to collect data for creating photo-realistic maps and volumetric 3-D models while at the same time avoiding collisions. The proposed method uses convolutional neural networks to learn the control commands based on the visual input (a minimal network sketch follows this list). We have demonstrated the feasibility of using a vision-only system to learn specific navigation strategies, with 80% accuracy on the prediction of control command changes. Experimental results and a detailed overview of the proposed method are discussed.
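Relating to the long-range ship tracking described in item 1 (and in the main abstract), the following is a rough sketch that stands in for the learned ship detector. The actual model, training data, and classes are not specified in the abstracts, so an off-the-shelf pretrained COCO detector from torchvision is used here purely for illustration.

# Illustrative stand-in for a learned ship detector: an off-the-shelf
# pretrained detector returning a confident "boat" bounding box, if any.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

COCO_BOAT_CLASS_ID = 9  # "boat" in the COCO label map used by torchvision detectors

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

@torch.no_grad()
def detect_ship(frame_rgb, score_threshold=0.5):
    """frame_rgb: HxWx3 uint8 RGB image. Returns [x1, y1, x2, y2] or None."""
    pred = model([to_tensor(frame_rgb)])[0]
    for box, label, score in zip(pred["boxes"], pred["labels"], pred["scores"]):
        if label.item() == COCO_BOAT_CLASS_ID and score.item() >= score_threshold:
            return box.tolist()  # pixel coordinates of the detected ship
    return None

In a tracking loop, the returned box center could drive a long-range pursuit command until the visual cue on the deck becomes resolvable for the final approach.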
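Relating to the mitigation proposed in item 4, below is a minimal sketch of the general idea of authenticating a CAN/J1939 payload with a pre-shared key so a receiver can detect manipulation by a shim MitM. It does not follow the SAE J1939 DM18 message format, and the key and tag length are placeholders.

# Minimal sketch: append a truncated HMAC tag, computed with a pre-shared key,
# to a payload so the receiver can detect in-transit manipulation.
import hmac
import hashlib

PRE_SHARED_KEY = b"replace-with-provisioned-key"  # placeholder key material

def protect(payload: bytes, tag_len: int = 8) -> bytes:
    """Return payload || truncated HMAC-SHA256 tag."""
    tag = hmac.new(PRE_SHARED_KEY, payload, hashlib.sha256).digest()[:tag_len]
    return payload + tag

def verify(message: bytes, tag_len: int = 8) -> bool:
    """Check the trailing tag using a constant-time comparison."""
    payload, tag = message[:-tag_len], message[-tag_len:]
    expected = hmac.new(PRE_SHARED_KEY, payload, hashlib.sha256).digest()[:tag_len]
    return hmac.compare_digest(tag, expected)

Appending an 8-byte tag to an 8-byte CAN payload roughly doubles the bytes on the bus, which is consistent with the bandwidth overhead the abstract reports.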
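Relating to item 5, the following is a rough sketch, under an assumed architecture and assumed command classes, of a small convolutional network that maps a camera frame to one of a few discrete control-command changes. It is illustrative only and is not the authors' network.

# Illustrative CNN mapping an RGB frame to a discrete control-command change.
import torch
import torch.nn as nn

COMMANDS = ["keep_course", "turn_left", "turn_right", "ascend", "descend"]  # assumed labels

class CommandNet(nn.Module):
    def __init__(self, num_commands=len(COMMANDS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_commands)

    def forward(self, x):                 # x: (N, 3, H, W), float in [0, 1]
        z = self.features(x).flatten(1)   # (N, 64) pooled feature vector
        return self.classifier(z)         # logits over command changes

# Example: choose a command change for a single 240x320 frame
net = CommandNet().eval()
with torch.no_grad():
    logits = net(torch.rand(1, 3, 240, 320))
    print(COMMANDS[logits.argmax(dim=1).item()])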