Title: Learning-to-Fly: Learning-based Collision Avoidance for Scalable Urban Air Mobility
With increasing urban population, there is global interest in Urban Air Mobility (UAM), where hundreds of autonomous Unmanned Aircraft Systems (UAS) execute missions in the airspace above cities. Unlike traditional human-in-the-loop air traffic management, UAM requires decentralized autonomous approaches that scale to an order of magnitude higher aircraft densities and are applicable to urban settings. We present Learning-to-Fly (L2F), a decentralized, on-demand airborne collision avoidance framework for multiple UAS that allows them to independently plan and safely execute missions with spatial, temporal and reactive objectives expressed using Signal Temporal Logic. We formulate the problem of predictively avoiding collisions between two UAS without violating mission objectives as a Mixed Integer Linear Program (MILP). This, however, is intractable to solve online. Instead, we develop L2F, a two-stage collision avoidance method that consists of: 1) a learning-based decision-making scheme and 2) a distributed, linear programming-based UAS control algorithm. Through extensive simulations, we show the real-time applicability of our method, which is ≈6000× faster than the MILP approach and resolves 100% of collisions when there is ample room to maneuver, degrading gracefully in performance otherwise. We also compare L2F to two other methods and demonstrate an implementation on quad-rotor robots.
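The core of the MILP formulation mentioned above is a disjunctive separation constraint: two aircraft must be at least a safe distance apart in one direction or the other, which is encoded with a binary variable and a big-M constant. The abstract does not give the formulation, so the following is only a 1-D toy sketch of that encoding (all positions, distances, and weights here are made up, and the real problem is over trajectories with STL mission constraints), using `scipy.optimize.milp`:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy 1-D separation disjunction: enforce x1 - x2 >= d OR x2 - x1 >= d
# via binary z and big-M, while minimizing total deviation
# |x1 - p1| + |x2 - p2| from desired positions (t1, t2 are the usual
# absolute-value auxiliaries). All numbers are illustrative.
p1, p2, d, M = 0.0, 0.5, 1.0, 10.0

# Variable order: [x1, x2, t1, t2, z]
c = np.array([0.0, 0.0, 1.0, 1.0, 0.0])          # minimize t1 + t2
A = np.array([
    [-1, 0, 1, 0, 0],      # t1 >= x1 - p1
    [ 1, 0, 1, 0, 0],      # t1 >= p1 - x1
    [ 0,-1, 0, 1, 0],      # t2 >= x2 - p2
    [ 0, 1, 0, 1, 0],      # t2 >= p2 - x2
    [ 1,-1, 0, 0, M],      # x1 - x2 >= d - M*z
    [-1, 1, 0, 0,-M],      # x2 - x1 >= d - M*(1 - z)
], dtype=float)
lb = np.array([-p1, p1, -p2, p2, d, d - M])

res = milp(
    c=c,
    constraints=LinearConstraint(A, lb, np.full(len(lb), np.inf)),
    integrality=np.array([0, 0, 0, 0, 1]),       # only z is binary
    bounds=Bounds([-np.inf, -np.inf, 0, 0, 0],
                  [ np.inf,  np.inf, np.inf, np.inf, 1]),
)
x1, x2 = res.x[0], res.x[1]
```

Even this tiny instance shows why the full problem is expensive: each aircraft pair at each prediction step adds a binary variable, so the branch-and-bound tree grows combinatorially, motivating the learned decision stage.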
Award ID(s):
1925587
NSF-PAR ID:
10222308
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)
Page Range / eLocation ID:
1 to 8
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Urban Air Mobility, the scenario where hundreds of manned and Unmanned Aircraft Systems (UASs) carry out a wide variety of missions (e.g., moving humans and goods within the city), is gaining acceptance as a transportation solution of the future. One of the key requirements for this to happen is safely managing the air traffic in these urban airspaces. Due to the expected density of the airspace, this requires fast autonomous solutions that can be deployed online. We propose Learning-‘N-Flying (LNF), a multi-UAS Collision Avoidance (CA) framework. It is decentralized, works on the fly, and allows autonomous UASs managed by different operators to safely carry out complex missions, represented using Signal Temporal Logic, in a shared airspace. We initially formulate the problem of predictive collision avoidance for two UASs as a mixed-integer linear program, and show that it is intractable to solve online. Instead, we first develop Learning-to-Fly (L2F) by combining (1) learning-based decision-making and (2) decentralized convex optimization-based control. LNF extends L2F to cases where there are more than two UASs on a collision path. Through extensive simulations, we show that our method can run online (computation time on the order of milliseconds) and, under certain assumptions, has failure rates of less than 1% in the worst case, improving to near 0% in more relaxed operations. We show the applicability of our scheme to a wide variety of settings through multiple case studies.
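The step that distinguishes LNF from pairwise L2F is recognizing when more than two UASs are on a mutual collision path. The abstract does not describe how conflicts are grouped, so the sketch below is only a hypothetical illustration of that idea: predict each UAS forward along a straight line (real predictions would follow the STL mission plans), flag pairs that come within a safety distance, and union-find them into joint conflict clusters:

```python
import itertools
import numpy as np

def conflict_clusters(p0, v, horizon=10.0, dt=0.5, d_safe=1.0):
    """Group UASs whose straight-line predictions come within d_safe of
    each other inside the horizon. Clusters of size > 2 are the cases a
    multi-UAS method must resolve jointly rather than pairwise.
    p0, v: (N, 2) arrays of current positions and velocities."""
    n = len(p0)
    parent = list(range(n))
    def find(i):                       # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    times = np.arange(0.0, horizon, dt)
    for i, j in itertools.combinations(range(n), 2):
        pi = p0[i] + times[:, None] * v[i]
        pj = p0[j] + times[:, None] * v[j]
        if np.min(np.linalg.norm(pi - pj, axis=1)) < d_safe:
            parent[find(i)] = find(j)  # merge the conflicting pair
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```

For example, three UASs converging on the same waypoint end up in one three-member cluster even if only two of the three pairwise conflicts are detected first, while a distant fourth UAS stays in its own singleton cluster.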
  2. Urban air mobility (UAM) using unmanned aerial vehicles (UAVs) is an emerging mode of air transportation within metropolitan areas. For UAM to operate successfully in dynamic and uncertain airspace environments, it is important to provide safe path planning for UAVs. To achieve path planning with safety assurance, the first step is to detect collisions. Due to uncertainty, especially data-driven uncertainty, it is impossible to decide deterministically whether a collision occurs between a pair of UAVs. Instead, in this paper we evaluate the probability of collision online for any general data-driven distribution. A sampling method based on a kernel density estimator (KDE) is introduced to approximate the data-driven distribution of the uncertainty; the probability of collision can then be computed as the Riemann sum of KDE values over the domain of the combined safety range. Comprehensive numerical simulations demonstrate the feasibility and efficiency of the online evaluation of probabilistic collision for UAM using the proposed collision detection algorithm.
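The pipeline this abstract describes, fit a KDE to position-error samples, then integrate it over the combined safety region, can be sketched compactly. The version below is only an illustration under made-up assumptions (Gaussian-generated "data-driven" error samples, a Monte-Carlo estimate standing in for the paper's Riemann sum), using `scipy.stats.gaussian_kde`:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Hypothetical 2-D position-error samples for two UAVs (e.g. from
# flight logs); the true error distribution is treated as unknown,
# so a KDE fitted to the samples stands in for it.
err_a = rng.normal(0.0, 0.3, size=(2, 500))   # shape (dim, n_samples)
err_b = rng.normal(0.0, 0.3, size=(2, 500))
kde_a, kde_b = gaussian_kde(err_a), gaussian_kde(err_b)

def collision_probability(pos_a, pos_b, d_safe, n=20000):
    """Sampling counterpart of integrating the KDEs over the combined
    safety range: draw perturbed positions from each fitted KDE and
    count how often they fall within d_safe of each other."""
    pa = pos_a[:, None] + kde_a.resample(n, seed=1)
    pb = pos_b[:, None] + kde_b.resample(n, seed=2)
    dist = np.linalg.norm(pa - pb, axis=0)
    return float(np.mean(dist < d_safe))

p_near = collision_probability(np.array([0.0, 0.0]), np.array([0.5, 0.0]), 1.0)
p_far  = collision_probability(np.array([0.0, 0.0]), np.array([10.0, 0.0]), 1.0)
```

As expected, the estimate is large when the nominal positions are within the safety range and the error clouds overlap (`p_near`), and essentially zero for well-separated aircraft (`p_far`).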
  3. With the rapid proliferation of small unmanned aircraft systems (UAS), the risk of mid-air collisions is growing, as is the risk associated with the malicious use of these systems. Airborne Detect-and-Avoid (ABDAA) and counter-UAS technologies have similar sensing requirements to detect and track airborne threats, albeit for different purposes: to avoid a collision or to neutralize a threat, respectively. These systems typically include a variety of sensors, such as electro-optical or infrared (EO/IR) cameras, RADAR, or LiDAR, and they fuse the data from these sensors to detect and track a given threat and to predict its trajectory. Camera imagery can be an effective method for detection as well as for pose estimation and threat classification, though a single camera cannot resolve range to a threat without additional information, such as knowledge of the threat geometry. To support ABDAA and counter-UAS applications, we consider a merger of two image-based sensing methods that mimic human vision: (1) a "peripheral vision" camera (i.e., with a fisheye lens) to provide a large field-of-view and (2) a "central vision" camera (i.e., with a perspective lens) to provide high resolution imagery of a specific target. Beyond the complementary ability of the two cameras to support detection and classification, the pair forms a heterogeneous stereo vision system that can support range resolution. This paper describes the initial development and testing of a peripheral-central vision system to detect, localize, and classify an airborne threat and finally to predict its path using knowledge of the threat class.
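The reason a fisheye and a perspective camera can act as a stereo pair, despite their very different projection models, is that each detection reduces to a bearing ray once the camera model is inverted; range then follows from triangulating the two rays across the known baseline. The abstract gives no algorithmic detail, so this is only a generic midpoint-triangulation sketch, not the paper's method:

```python
import numpy as np

def triangulate(ray_a, ray_b, baseline):
    """Range a target from two bearing rays, one per camera, with
    camera A at the origin and camera B at `baseline`. Works for any
    camera model (fisheye or perspective) once pixels have been
    converted to unit bearing rays. Returns the midpoint of the
    closest approach between the two rays."""
    a = ray_a / np.linalg.norm(ray_a)
    b = ray_b / np.linalg.norm(ray_b)
    # Solve [a, -b] [s, t]^T ~= baseline for distances along each ray.
    M = np.stack([a, -b], axis=1)
    st, *_ = np.linalg.lstsq(M, baseline, rcond=None)
    p_a = st[0] * a                 # closest point on ray A
    p_b = baseline + st[1] * b      # closest point on ray B
    return 0.5 * (p_a + p_b)
```

With noise-free rays the two closest points coincide at the target; with real detections the midpoint and the gap between the rays give a range estimate and a crude consistency check.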
  4. To obtain more consistent measurements through the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint the agronomists have is to not run over the wheat while driving. Accordingly, we have trained a spatial deep learning model that helps navigate the robot autonomously in the field while avoiding collisions with the wheat. To train this model, we used publicly available databases of prelabeled images of wheat, along with the images of wheat that we have collected in the field. We used the MobileNet single shot detector (SSD) as our deep learning model to detect wheat in the field. To increase the frame rate for real-time robot response to field environments, we trained MobileNet SSD on the wheat images and used a new stereo camera, the Luxonis Depth AI Camera. Together, the newly trained model and camera could achieve a frame rate of 18–23 frames per second (fps), fast enough for the robot to process its surroundings once every 2–3 inches of driving. Once we confirmed that the robot accurately detects its surroundings, we addressed its autonomous navigation. The new stereo camera allows the robot to determine its distance from the trained objects. In this work, we also developed a navigation and collision avoidance algorithm that utilizes this distance information to help the robot see its surroundings and maneuver in the field, thereby precisely avoiding collisions with the wheat crop. Extensive experiments were conducted to evaluate the performance of our proposed method. We also compared the quantitative results obtained by our proposed MobileNet SSD model with those of other state-of-the-art object detection models, such as the YOLO V5 and Faster region-based convolutional neural network (R-CNN) models. The detailed comparative analysis reveals the effectiveness of our method in terms of both model precision and inference speed.
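The navigation step the abstract describes, turning per-detection distance estimates from the stereo camera into maneuvers that avoid the crop, can be illustrated with a minimal reactive rule. The paper's actual algorithm is not given here, so the function below is a hypothetical stand-in: stop if any detection is dangerously close, otherwise slow down and steer away from the nearest one:

```python
def steer_command(detections, stop_dist=0.3, slow_dist=1.0):
    """detections: list of (distance_m, bearing_rad) pairs for detected
    wheat, with bearing < 0 meaning left of the camera center. A
    minimal distance-thresholded avoidance rule (illustrative only):
    stop below stop_dist, slow and turn away below slow_dist,
    otherwise drive straight at full speed."""
    if not detections:
        return {"speed": 1.0, "turn": 0.0}
    dist, bearing = min(detections)        # nearest detection first
    if dist < stop_dist:
        return {"speed": 0.0, "turn": 0.0}
    if dist < slow_dist:
        # Turn away from the obstacle: right (+) if it is on the left.
        return {"speed": dist / slow_dist,
                "turn": 0.5 if bearing <= 0 else -0.5}
    return {"speed": 1.0, "turn": 0.0}
```

At 18-23 fps and 2-3 inches of travel per frame, even a simple rule like this gets several chances to react before the robot closes the roughly one-meter slow-down distance assumed above.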
  5. In this work, we present a per-instant pose optimization method that can generate configurations that achieve specified pose or motion objectives as well as possible over a sequence of solutions, while also simultaneously avoiding collisions with static or dynamic obstacles in the environment. We cast our method as a weighted-sum, non-linear, constrained optimization-based inverse kinematics (IK) problem where each term in the objective function encodes a particular pose objective. We demonstrate how to effectively incorporate environment collision avoidance as a single term in this multi-objective, optimization-based IK structure, and provide solutions for how to spatially represent and organize external environments such that data can be efficiently passed to a real-time, performance-critical optimization loop. We demonstrate the effectiveness of our method by comparing it to various state-of-the-art methods in a testbed of simulation experiments and discuss the implications of our work based on our results.