

Title: EVPropNet: Detecting Drones By Finding Propellers For Mid-Air Landing And Following
The rapid rise in the accessibility of unmanned aerial vehicles (drones) poses a threat to general security and confidentiality. Most commercially available or custom-built drones are multi-rotors comprising multiple propellers. Since these propellers rotate at high speed, they are generally the fastest-moving parts of an image and cannot be directly "seen" by a classical camera without severe motion blur. We utilize a class of sensors particularly suited to such scenarios, called event cameras, which have high temporal resolution, low latency, and high dynamic range. In this paper, we model the geometry of a propeller and use it to generate simulated events, which are used to train a deep neural network called EVPropNet to detect propellers in event-camera data. EVPropNet transfers directly to the real world without any fine-tuning or retraining. We present two applications of our network: (a) tracking and following an unmarked drone and (b) landing on a near-hover drone. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with different propeller shapes and sizes. Our network detects propellers at a rate of 85.1% even when 60% of the propeller is occluded, and can run at up to 35 Hz on a 2 W power budget. To our knowledge, this is the first deep-learning-based solution for detecting propellers (to detect drones). Finally, our applications also show impressive success rates of 92% and 90% for the tracking and landing tasks, respectively.
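To make the data-generation idea above concrete, here is a minimal Python sketch (not the authors' code) of how simulated events might be produced from a parametric propeller model: render a binary propeller mask at successive rotation angles and emit an event wherever a pixel changes. The blade profile, image size, and function names are illustrative assumptions, not EVPropNet's actual geometry model.

```python
# Minimal sketch: simulating event-camera output from a rotating propeller.
# The propeller geometry below is a hypothetical placeholder.
import numpy as np

def propeller_mask(theta, size=128, blades=3, hub=4.0, length=55.0, width=0.25):
    """Binary image of a `blades`-bladed propeller rotated by `theta` radians."""
    ys, xs = np.mgrid[:size, :size] - size / 2.0
    r = np.hypot(xs, ys)
    phi = np.arctan2(ys, xs) - theta
    # A pixel is "on blade" if its angle is close to a blade axis and its
    # radius lies between the hub and the blade tip.
    blade_angles = (phi % (2 * np.pi / blades)) - np.pi / blades
    return (np.abs(blade_angles) < width) & (r > hub) & (r < length)

def simulate_events(rpm=6000, fps=10000, steps=200):
    """Emit (x, y, t, polarity) tuples wherever the rotating mask changes."""
    omega = rpm / 60.0 * 2 * np.pi  # angular velocity in rad/s
    dt = 1.0 / fps
    events, prev = [], propeller_mask(0.0)
    for k in range(1, steps):
        cur = propeller_mask(omega * k * dt)
        diff = cur.astype(int) - prev.astype(int)   # +1 ON event, -1 OFF event
        ys, xs = np.nonzero(diff)
        events += [(x, y, k * dt, diff[y, x]) for x, y in zip(xs, ys)]
        prev = cur
    return events

print(len(simulate_events()), "events generated")
```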
Award ID(s):
1824198
PAR ID:
10309536
Journal Name:
Robotics Science and Systems
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Abstract Drones increasingly collaborate with human workers in workspaces such as warehouses, and the failure of a drone flight during aerial tasks can endanger the safety of nearby workers. One of the most common flight failures is triggered by damaged propellers. To quickly detect physical damage to propellers, recognise risky flights, and provide early warnings to surrounding human workers, a new and comprehensive fault diagnosis framework is presented that uses only the audio caused by propeller rotation, without accessing any flight data. The framework has three components: convolutional neural networks, transfer learning, and Bayesian optimisation. The audio signal from an actual flight is collected and transformed into time–frequency spectrograms. First, a convolutional neural network-based diagnosis model that utilises these spectrograms is developed to identify whether any broken propeller is involved in a specific drone flight. The authors also employ Monte Carlo dropout sampling to obtain the inconsistency of diagnostic results and compute the entropy (uncertainty) of the mean probability score vector as another factor in diagnosing the drone flight. Second, to reduce data dependence across drone types, the diagnosis model is augmented by transfer learning: the knowledge of a well-trained diagnosis model is refined using a small set of data from a different drone, so that the modified model can detect a broken propeller on the second drone. Third, to reduce hyperparameter-tuning effort and reinforce the robustness of the network, Bayesian optimisation uses the observed diagnosis model performances to construct a Gaussian process model whose acquisition function chooses the optimal network hyperparameters. The proposed diagnosis framework is validated via real experimental flight tests and achieves reasonably high diagnosis accuracy.
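As a rough illustration of two steps this abstract describes, the sketch below converts rotor audio into a log-power time-frequency spectrogram and computes the entropy of the mean probability vector from Monte Carlo dropout samples. `model_predict` is a hypothetical stand-in for the trained CNN with dropout kept active at inference; this is not the authors' implementation.

```python
# Sketch of spectrogram extraction and MC-dropout uncertainty scoring.
import numpy as np
from scipy.signal import spectrogram

def audio_to_spectrogram(audio, fs=44100):
    """Log-power time-frequency representation of propeller audio."""
    f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=512)
    return np.log(Sxx + 1e-10)

def mc_dropout_entropy(spec, model_predict, n_samples=30):
    """Mean class probabilities over stochastic forward passes, plus entropy.
    `model_predict` must return a softmax vector with dropout active."""
    probs = np.stack([model_predict(spec) for _ in range(n_samples)])
    mean_p = probs.mean(axis=0)
    entropy = -np.sum(mean_p * np.log(mean_p + 1e-10))
    return mean_p, entropy  # high entropy flags an uncertain diagnosis
```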
2. As drones become widespread in numerous crucial applications with many powerful functionalities (e.g., reconnaissance and mechanical triggering), there are increasing cases of drones misused for unethical or even criminal activities. It is therefore of paramount importance to identify malicious drones and track their origins using digital forensics. Traditional drone identification techniques for forensics (e.g., RF communication, ID landmarks read by a camera) require high compliance from drones; malicious drones will not cooperate and may even spoof these identification techniques. We therefore present an exploration of a reliable and passive identification approach based on unique hardware traits of the drones themselves (analogous to fingerprints and irises in humans) for forensics purposes. Specifically, we investigate and model the behavior of parasitic electronic elements under RF interrogation: a passive parasitic response modulated by a drone's electronic system that is distinctive and unlikely to be counterfeited. Based on this theory, we design and implement DroneTrace, an end-to-end reliable and passive identification system for digital drone forensics. DroneTrace comprises a cost-effective millimeter-wave (mmWave) probe, a software framework to extract and process parasitic responses, and a customized deep neural network (DNN)-based algorithm to analyze and identify drones. We evaluate the performance of DroneTrace with 36 commodity drones. Results show that DroneTrace can identify drones with an accuracy of over 99% and an equal error rate (EER) of 0.009 under a 0.1-second sensing-time budget. Moreover, we test the reliability, robustness, and performance variation under a set of real-world circumstances, where DroneTrace maintains an accuracy of over 98%. DroneTrace is resilient to various attacks and maintains functionality. At its best, DroneTrace can identify individual drones at a scale of 10^4 with less than 5% error.
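For readers unfamiliar with the EER metric reported above, the following is a small, generic sketch of how an equal error rate can be computed from genuine and impostor match scores; it is illustrative only and not DroneTrace's evaluation code.

```python
# Generic equal-error-rate computation from identification scores.
import numpy as np

def equal_error_rate(genuine, impostor):
    """EER: the error rate at the threshold where the false-accept rate
    (impostors accepted) equals the false-reject rate (genuines rejected)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    gaps = []
    for th in thresholds:
        far = np.mean(impostor >= th)   # false-accept rate
        frr = np.mean(genuine < th)     # false-reject rate
        gaps.append((abs(far - frr), (far + frr) / 2.0))
    return min(gaps)[1]  # EER where FAR and FRR cross

# Toy usage with synthetic scores:
rng = np.random.default_rng(0)
print(equal_error_rate(rng.normal(0.9, 0.05, 500), rng.normal(0.3, 0.1, 500)))
```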
3. Existing approaches for autonomous control of pan-tilt-zoom (PTZ) cameras use multiple stages in which object detection and localization are performed separately from control of the PTZ mechanisms. These approaches require manual labels and suffer from performance bottlenecks due to error propagation across the multi-stage flow of information. The large size of object detection neural networks also makes prior solutions infeasible for real-time deployment on resource-constrained devices. We present an end-to-end deep reinforcement learning (RL) solution called Eagle that trains a neural network policy taking images directly as input to control the PTZ camera. Training reinforcement learning policies in the real world is cumbersome due to labeling effort, runtime environment stochasticity, and fragile experimental setups. We introduce a photo-realistic simulation framework for training and evaluating PTZ camera control policies. Eagle achieves superior camera control performance by maintaining the object of interest close to the center of captured images at high resolution, and achieves up to 17% longer tracking duration than the state of the art. Eagle policies are lightweight (90x fewer parameters than YOLOv5s) and can run on embedded camera platforms such as the Raspberry Pi (33 FPS) and Jetson Nano (38 FPS), enabling real-time PTZ tracking in resource-constrained environments. With domain randomization, Eagle policies trained in our simulator can be transferred directly to real-world scenarios.
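The objective Eagle optimizes (keeping the object of interest centered and at high resolution) can be illustrated with a simple reward function. The sketch below is an assumed, generic formulation, not Eagle's actual reward; the weights and bounding-box convention are hypothetical.

```python
# Assumed reward shaping for a PTZ tracking policy: favor a centered,
# large-in-frame target. Not Eagle's published reward.
import numpy as np

def ptz_tracking_reward(bbox, img_w=640, img_h=480,
                        center_weight=1.0, size_weight=0.5):
    """bbox = (x_min, y_min, x_max, y_max) of the tracked object."""
    x_min, y_min, x_max, y_max = bbox
    cx, cy = (x_min + x_max) / 2.0, (y_min + y_max) / 2.0
    # Normalized distance of the object center from the image center.
    dist = np.hypot((cx - img_w / 2) / (img_w / 2),
                    (cy - img_h / 2) / (img_h / 2))
    # Fraction of the frame the object covers (proxy for zoom/resolution).
    area = (x_max - x_min) * (y_max - y_min) / float(img_w * img_h)
    return center_weight * (1.0 - dist) + size_weight * area

print(ptz_tracking_reward((280, 200, 360, 280)))  # well-centered target
```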
4. Abstract Recent interest in urban and regional air mobility, together with the need to reduce the aviation industry's emissions, has motivated research and development of novel propeller-driven vehicles. These vehicles range from conventional takeoff and landing designs to complex rotorcraft that transition between vertical and horizontal flight. Such designs must be optimized for efficiency throughout their missions, leveraging the tightly coupled nature of propeller-wing interaction. In this work, we study the NASA tiltwing concept vehicle wing with varying numbers of propellers, from no propellers to five propellers evenly spaced along the wing. Using aerodynamic shape optimization, we optimize the wing shape for each propeller-wing configuration, minimizing wing drag. These optimizations are carried out with DAFoam, a discrete adjoint implementation for OpenFOAM, embedded within OpenMDAO and the MPhys optimization framework. The optimizations show that the lowest-drag configuration is a single propeller mounted at the wing tip. Increasing the number of propellers slightly increases drag relative to the single-propeller configuration. However, aerodynamic shape optimization considering propeller-wing interaction yields a negligible benefit compared to aerodynamic optimization of an isolated wing that is subsequently trimmed to the desired flight condition in the presence of a propeller.
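As a conceptual analogue of the gradient-based shape optimization described above (and not a DAFoam/OpenFOAM setup), the toy example below minimizes a quadratic induced-drag model over a spanwise twist distribution subject to a fixed-lift constraint; all coefficients are illustrative assumptions.

```python
# Toy constrained shape optimization: minimize a drag surrogate over
# spanwise twist while holding total lift fixed. Illustrative only.
import numpy as np
from scipy.optimize import minimize

n = 10                                     # spanwise twist design variables
lift_per_deg = np.linspace(1.0, 0.4, n)    # assumed local lift sensitivity
target_lift = 5.0

def drag(twist):
    # Quadratic penalty: drag surrogate grows with local loading squared.
    return np.sum((lift_per_deg * twist) ** 2)

cons = {"type": "eq",
        "fun": lambda t: np.dot(lift_per_deg, t) - target_lift}
res = minimize(drag, x0=np.ones(n), constraints=cons)
print("optimal twist:", np.round(res.x, 3), " drag:", round(res.fun, 4))
```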
5. The paper discusses an intelligent vision-based control solution for autonomous tracking and landing of Vertical Take-Off and Landing (VTOL) capable Unmanned Aerial Vehicles (UAVs) on ships without using a GPS signal. The central idea is to automate the Navy helicopter ship-landing procedure, in which the pilot uses the ship as the visual reference for long-range tracking but refers to a standardized visual cue installed on most Navy ships, called the "horizon bar", for the final approach and landing phases. This idea is implemented using a uniquely designed nonlinear controller integrated with machine vision. The vision system uses machine-learning-based object detection for long-range ship tracking, and classical computer vision to estimate the aircraft's relative position and orientation from the horizon bar during the final approach and landing phases. The nonlinear controller operates on the information estimated by the vision system and has demonstrated robust tracking performance even in the presence of uncertainties. The developed autonomous ship-landing system was implemented on a quadrotor UAV equipped with an onboard camera, and approach and landing were successfully demonstrated on a moving deck that imitates realistic ship deck motions. Extensive simulations and flight tests were conducted to demonstrate vertical landing safety, tracking capability, and landing accuracy. The video of the real-world experiments and demonstrations is available at this URL.
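A hedged sketch of the final-approach vision step described above: if the horizon bar is modeled as a planar rectangle of known size and its four corners are detected in the image, the aircraft-relative pose can be recovered with a standard PnP solve. The bar dimensions and corner model here are assumptions, not the paper's actual parameters.

```python
# Sketch: relative pose from a detected "horizon bar" via PnP (OpenCV).
import cv2
import numpy as np

BAR_W, BAR_H = 1.2, 0.1  # assumed bar width/height in meters

# 3D corners of the bar in its own frame (origin at the bar center).
object_pts = np.array([[-BAR_W / 2, -BAR_H / 2, 0],
                       [ BAR_W / 2, -BAR_H / 2, 0],
                       [ BAR_W / 2,  BAR_H / 2, 0],
                       [-BAR_W / 2,  BAR_H / 2, 0]], dtype=np.float32)

def relative_pose(image_corners, K, dist_coeffs=None):
    """image_corners: 4x2 detected bar corners; K: 3x3 camera intrinsics.
    Returns the bar's rotation vector and translation in the camera frame,
    from which the aircraft's relative position/orientation follow."""
    ok, rvec, tvec = cv2.solvePnP(object_pts,
                                  np.asarray(image_corners, np.float32),
                                  K, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    return rvec, tvec
```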