It is now possible to deploy swarms of drones with populations in the thousands. There is growing interest in using such swarms for defense, and it has been natural to program them with bio-mimetic motion models such as flocking or swarming. However, these motion models evolved to survive against predators, not enemies with modern firearms. This paper presents experimental data comparing the survivability of several motion models for large numbers of drones. We test drone swarms in Virtual Reality (VR) because it is prohibitively expensive, technically complex, and potentially dangerous to fly a large swarm of drones in a testing environment. We model the behavior of drone swarms flying along parametric paths in both tight and scattered formations, and we add random motion to the general motion plan to confound path prediction and targeting. We describe an implementation of these flight paths as game levels in a VR environment. We then let players shoot at the drones and evaluate the effect of flocking versus swarming behavior on drone survivability.
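The random-motion idea above can be sketched as follows. The circular base path, jitter scale, and function name are illustrative assumptions, not the paper's actual flight model:

```python
import numpy as np

def jittered_path(t, jitter_scale=0.5, rng=None):
    """Drone position at time t: a smooth parametric path (here a
    circle at fixed altitude, an assumed example) plus random jitter
    that confounds path prediction and targeting."""
    rng = rng or np.random.default_rng(0)
    base = np.array([np.cos(t), np.sin(t), 10.0])   # nominal parametric path
    noise = rng.normal(scale=jitter_scale, size=3)  # unpredictable offset
    return base + noise
```

A targeting system that extrapolates the nominal path will miss by roughly the jitter scale, which is the point of the perturbation.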
The Peeping Eye in the Sky
In this paper, we investigate the threat of drones equipped with recording devices, which capture videos of individuals typing on their mobile devices and extract touch inputs such as passcodes from the videos. Deploying this kind of attack from the air is significantly challenging because of camera vibration and movement caused by drone dynamics and the wind. Our algorithms can estimate the motion trajectory of the touching finger and derive the typing pattern and, from it, the touch inputs. Our experiments show that we can achieve a high success rate against both tablets and smartphones with a DJI Phantom drone from a long distance. A 2.5" NEUTRON mini drone flying outside a window also achieves a high success rate against tablets behind the window. To the best of our knowledge, we are the first to systematically study drones revealing user inputs on mobile devices and to use the finger motion trajectory alone to recover passcodes typed on mobile devices.
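As a rough illustration of the final recovery step, the sketch below maps estimated fingertip tap locations to a hypothetical 3x3 passcode keypad by nearest key center. The layout, coordinates, and function name are assumptions; the paper's trajectory-estimation pipeline under drone vibration is far more involved:

```python
import numpy as np

# Hypothetical 3x3 keypad layout: digit -> key-center coordinates.
# A real attack would recover the layout and scale from the video.
KEYS = {str(d): np.array([(d - 1) % 3, (d - 1) // 3], float)
        for d in range(1, 10)}

def taps_to_passcode(tap_points):
    """Map each estimated fingertip tap location to the nearest key."""
    code = ""
    for p in tap_points:
        code += min(KEYS, key=lambda k: np.linalg.norm(KEYS[k] - np.asarray(p)))
    return code
```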
- Award ID(s): 1642124
- PAR ID: 10082819
- Date Published:
- Journal Name: IEEE Global Communications Conference
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
The rapid rise in accessibility of unmanned aerial vehicles, or drones, poses a threat to general security and confidentiality. Most commercially available or custom-built drones are multi-rotors comprising multiple propellers. Since these propellers rotate at high speed, they are generally the fastest-moving parts of an image and cannot be directly "seen" by a classical camera without severe motion blur. We utilize a class of sensors that are particularly suitable for such scenarios called event cameras, which have high temporal resolution, low latency, and high dynamic range. In this paper, we model the geometry of a propeller and use it to generate simulated events, which are used to train a deep neural network called EVPropNet to detect propellers from the data of an event camera. EVPropNet transfers directly to the real world without any fine-tuning or retraining. We present two applications of our network: (a) tracking and following an unmarked drone and (b) landing on a near-hover drone. We successfully evaluate and demonstrate the proposed approach in many real-world experiments with different propeller shapes and sizes. Our network can detect propellers at a rate of 85.1% even when 60% of the propeller is occluded and can run at up to 35 Hz on a 2 W power budget. To our knowledge, this is the first deep-learning-based solution for detecting propellers (to detect drones). Finally, our applications also show an impressive success rate of 92% and 90% for the tracking and landing tasks, respectively.
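A minimal sketch of the simulated-event idea, assuming an idealized two-blade propeller rendered as a binary mask; the paper's geometric model and event-camera simulation are more detailed, so the shapes and parameters here are illustrative:

```python
import numpy as np

def propeller_mask(angle, n_blades=2, size=64, blade_len=28, blade_w=2):
    """Binary image of an idealized n-blade propeller at a given angle
    (a simplified stand-in for a full geometric propeller model)."""
    c = size // 2
    img = np.zeros((size, size), bool)
    for b in range(n_blades):
        a = angle + 2 * np.pi * b / n_blades
        for r in range(blade_len):
            x = int(c + r * np.cos(a))
            y = int(c + r * np.sin(a))
            img[max(y - blade_w, 0):y + blade_w,
                max(x - blade_w, 0):x + blade_w] = True
    return img

def simulate_events(angle, d_angle):
    """Events = pixels whose brightness changes as the propeller rotates:
    ON events where a blade arrives, OFF events where it leaves."""
    before, after = propeller_mask(angle), propeller_mask(angle + d_angle)
    on = np.argwhere(after & ~before)    # ON-polarity event locations
    off = np.argwhere(before & ~after)   # OFF-polarity event locations
    return on, off
```

Sweeping `angle` over a rotation yields a labeled event stream for training a detector without any real captures.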
-
We address the problem of human action classification in drone videos. Due to the high cost of capturing and labeling large-scale drone videos with diverse actions, we present unsupervised and semi-supervised domain adaptation approaches that leverage both existing fully annotated action recognition datasets and unannotated (or only sparsely annotated) videos from drones. To study the emerging problem of drone-based action recognition, we create a new dataset, NEC-DRONE, containing 5,250 videos to evaluate the task. We tackle both problem settings with 1) the same and 2) different action label sets for the source (e.g., the Kinetics dataset) and target domains (drone videos). We present a combination of video- and instance-based adaptation methods, paired with either a classifier or an embedding-based framework, to transfer knowledge from source to target. Our results show that the proposed adaptation approach substantially improves performance on these challenging and practical tasks. We further demonstrate the applicability of our method for learning cross-view action recognition on the Charades-Ego dataset. We provide qualitative analysis to understand the behaviors of our approaches.
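One common embedding-alignment objective for this kind of source-to-target transfer is the maximum mean discrepancy (MMD) between source and target feature distributions. The linear-kernel sketch below is an illustrative stand-in, not necessarily the paper's exact adaptation loss:

```python
import numpy as np

def mmd2_linear(source_feats, target_feats):
    """Squared MMD with a linear kernel: the distance between the mean
    embeddings of source (annotated) and target (drone) features.
    Minimizing this term during training pulls the two domains together
    in feature space, so a classifier trained on source transfers better."""
    diff = source_feats.mean(axis=0) - target_feats.mean(axis=0)
    return float(diff @ diff)
```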
-
Mobile devices typically rely on entry-point and other one-time authentication mechanisms such as a password, PIN, fingerprint, iris, or face. But these authentication types are prone to a wide attack vector and, worse still, once compromised, fail to protect the user's account and data. For example, a patterned password is prone to smudge attacks, fingerprint scanning is prone to spoof attacks, and other forms of attack include video capture and shoulder surfing. In contrast, continuous authentication, based on traits of human behavior, can offer additional security measures in the device to authenticate against unauthorized users, even after the entry-point and one-time authentication has been compromised. To this end, we have collected a new dataset of multiple behavioral biometric modalities (49 users) when a user fills out an account recovery form while seated, using an Android app. These include motion events (acceleration and angular velocity), touch and swipe events, keystrokes, and pattern tracing. In this paper, we focus on authentication based on motion events by evaluating a set of score-level fusion techniques to authenticate users based on the acceleration and angular velocity data. The best EERs of 2.4% and 6.9% for intra- and inter-session, respectively, are achieved by fusing acceleration and angular velocity using Nandakumar et al.'s likelihood ratio (LR) based score fusion.
(Ray, A., Hou, D., Schuckers, S. and Barbir, A. Continuous Authentication based on Hand Micro-movement during Smartphone Form Filling by Seated Human Subjects. In Proceedings of the 7th International Conference on Information Systems Security and Privacy (ICISSP 2021), pages 424-431. DOI: 10.5220/0010225804240431.)
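Likelihood-ratio score fusion in the spirit of Nandakumar et al. can be sketched as below. The single-Gaussian score densities and parameter values are illustrative assumptions; the original approach fits Gaussian mixtures to training match scores:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Gaussian density, used here as a simple stand-in score model."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lr_fusion(scores, gen_params, imp_params):
    """Fuse per-modality match scores (e.g., accelerometer, gyroscope)
    by multiplying each modality's likelihood ratio
    p(score | genuine) / p(score | impostor)."""
    lr = 1.0
    for s, (mg, sg), (mi, si) in zip(scores, gen_params, imp_params):
        lr *= gauss_pdf(s, mg, sg) / gauss_pdf(s, mi, si)
    return lr  # accept if lr exceeds a threshold tuned for a target EER
```

Sweeping the acceptance threshold over validation scores is how an operating point such as the reported EER would be located.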
-
Drone simulators are often used to reduce training costs and prepare operators for various ad-hoc scenarios, as well as to test the quality of algorithmic and communication aspects in collaborative scenarios. An important aspect of drone missions in simulated (as well as real-life) environments is the operational lifetime of a given drone, in both solo and collaborative fleet settings. Its importance stems from the fact that the capacity of the on-board batteries in untethered (i.e., free-flying) drones determines the range and/or the length of the trajectory that a drone can travel in the course of its surveillance or delivery missions. Most existing simulators incorporate some kind of consumption model based on different parameters of the drone and its flight trajectory. However, to our knowledge, existing simulators are not capable of incorporating data obtained from actual physical measurements/observations into the consumption model. In this work, we take a first step towards enabling (users of) drone simulators to incorporate the speed and direction of the wind into the model and monitor its impact on battery consumption as the direction of flight changes relative to the wind. We have also developed a proof-of-concept implementation with DJI Mavic 3 and Parrot ANAFI drones.
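A minimal sketch of a wind-aware consumption model, assuming airspeed is ground velocity minus the wind vector and that parasitic power grows with the cube of airspeed. The constants, functional form, and function name are illustrative, not the simulator's actual model:

```python
import numpy as np

def power_draw(ground_velocity, wind, p_hover=120.0, k_drag=2.0):
    """Illustrative battery draw in watts. The drone's airspeed is its
    ground velocity minus the wind vector, so flying upwind raises
    airspeed (and draw) while flying downwind lowers it."""
    airspeed = np.linalg.norm(np.asarray(ground_velocity, float)
                              - np.asarray(wind, float))
    return p_hover + k_drag * airspeed ** 3
```

Integrating this over a planned trajectory, with wind measured rather than assumed, is the kind of measurement-driven estimate the work aims to support.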