This content will become publicly available on December 4, 2025

Title: The OODA Loop of Cloudlet-Based Autonomous Drones
We present a benchmark-driven experimental study of autonomous drone agility relative to edge offload pipeline attributes. This pipeline includes a monocular gimbal-actuated on-drone camera, hardware RTSP video encoding, 4G LTE wireless network transmission, and computer vision processing on a ground-based GPU-equipped cloudlet. Our parameterized and reproducible agility benchmarks stress the OODA ("Observe, Orient, Decide, Act") loop of the drone on obstacle avoidance and object tracking tasks. We characterize the latency and throughput of components of this OODA loop through software profiling, and identify opportunities for optimization.
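The per-stage latency characterization described above can be made concrete with a small profiling harness. The sketch below times each stage of one OODA cycle (capture, encode, transmit, infer, act); the stage functions and their sleep durations are hypothetical stand-ins, not the paper's actual instrumentation or measured values.

```python
# Minimal sketch of per-stage OODA-loop latency profiling for an edge-offload
# pipeline. Stage names and timings are illustrative stand-ins, not the
# paper's instrumentation or measurements.
import time
import statistics
from contextlib import contextmanager
from collections import defaultdict

stage_latencies = defaultdict(list)

@contextmanager
def timed(stage):
    """Record wall-clock latency of one pipeline stage."""
    start = time.perf_counter()
    yield
    stage_latencies[stage].append(time.perf_counter() - start)

def capture_frame():      time.sleep(0.005)   # stand-in for camera capture
def encode_frame():       time.sleep(0.010)   # stand-in for hardware RTSP/H.264 encode
def transmit_frame():     time.sleep(0.020)   # stand-in for 4G LTE uplink
def infer_on_cloudlet():  time.sleep(0.030)   # stand-in for GPU vision inference
def send_command():       time.sleep(0.010)   # stand-in for actuation command downlink

for _ in range(50):                            # one iteration == one OODA cycle
    with timed("observe/capture"):   capture_frame()
    with timed("observe/encode"):    encode_frame()
    with timed("orient/transmit"):   transmit_frame()
    with timed("decide/inference"):  infer_on_cloudlet()
    with timed("act/command"):       send_command()

for stage, samples in stage_latencies.items():
    p95 = sorted(samples)[int(0.95 * len(samples)) - 1]
    print(f"{stage:20s} mean={statistics.mean(samples)*1e3:6.1f} ms  p95={p95*1e3:6.1f} ms")
```

Summing the per-stage means gives an estimate of end-to-end OODA cycle time, which is the quantity that bounds drone agility in an offload pipeline of this kind.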
Award ID(s): 2106862
PAR ID: 10588994
Author(s) / Creator(s):
Publisher / Repository: IEEE
Date Published:
ISBN: 979-8-3503-7828-3
Page Range / eLocation ID: 178 to 190
Subject(s) / Keyword(s): edge computing, machine learning, computer vision, drones, autonomous robotics, wireless networks, benchmarks, latency, throughput
Format(s): Medium: X
Location: Rome, Italy
Sponsoring Org: National Science Foundation
More Like This
  1. Autonomous drones (UAVs) have rapidly grown in popularity due to their form factor, agility, and ability to operate in harsh or hostile environments. Drone systems come in various form factors and configurations and operate under tight physical constraints. Further, it has been a significant challenge for architects and researchers to develop optimal drone designs, as open-source simulation frameworks either lack the capabilities needed to simulate a full drone flight stack or are extremely tedious to set up, with little or no maintenance or support. In this paper, we develop and present UniUAVSim, our fully open-source co-simulation framework capable of running software-in-the-loop (SITL) and hardware-in-the-loop (HITL) simulations concurrently. The paper also provides insights into the abstraction of a drone flight stack and details how these abstractions aid in creating a simulation framework that can accurately provide an optimal drone design given physical parameters and constraints. The framework was validated with real-world hardware and is available to the research community to aid future architecture research for autonomous systems.
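The lock-step co-simulation cycle that SITL setups imply can be sketched in a few lines. The classes and dynamics below are hypothetical toys (a 1-D altitude-hold controller stepping against a point-mass model), not UniUAVSim's API or flight stack.

```python
# Minimal sketch of a lock-step software-in-the-loop (SITL) co-simulation cycle.
# Class and method names are hypothetical illustrations, not UniUAVSim's API.
from dataclasses import dataclass

@dataclass
class DroneState:
    altitude: float = 0.0           # metres
    vertical_velocity: float = 0.0  # m/s

class PhysicsSim:
    """Toy 1-D vertical dynamics standing in for a full vehicle model."""
    GRAVITY = 9.81
    def step(self, state: DroneState, thrust_accel: float, dt: float) -> DroneState:
        accel = thrust_accel - self.GRAVITY
        v = state.vertical_velocity + accel * dt
        z = max(0.0, state.altitude + v * dt)
        return DroneState(altitude=z, vertical_velocity=v if z > 0 else 0.0)

class FlightController:
    """Toy proportional-derivative altitude-hold controller standing in for a flight stack."""
    def __init__(self, target_altitude: float):
        self.target = target_altitude
    def command(self, state: DroneState) -> float:
        error = self.target - state.altitude
        return 9.81 + 2.0 * error - 1.5 * state.vertical_velocity  # commanded thrust accel (m/s^2)

sim, ctrl, state, dt = PhysicsSim(), FlightController(target_altitude=10.0), DroneState(), 0.01
for _ in range(1000):                    # 10 s of simulated flight
    thrust = ctrl.command(state)         # controller observes the simulated state
    state = sim.step(state, thrust, dt)  # physics advances in lock step with the controller
print(f"final altitude: {state.altitude:.2f} m")
```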
  2. This paper introduces LiSWARM, a low-cost LiDAR system to detect and track individual drones in a large swarm. LiSWARM provides robust and precise localization and recognition of drones in 3D space, which is not possible with state-of-the-art drone tracking systems that rely on radio-frequency (RF), acoustic, or RGB image signatures. It includes (1) an efficient data processing pipeline to process the point clouds, (2) robust priority-aware clustering algorithms to isolate swarm data from the background, (3) a reliable neural network-based algorithm to recognize the drones, and (4) a technique to track the trajectory of every drone in the swarm. We develop the LiSWARM prototype and validate it through both in-lab and field experiments. Notably, we measure its performance during two drone light shows involving 150 and 500 drones and confirm that the system achieves up to 98% accuracy in recognizing drones and reliably tracking drone trajectories. To evaluate the scalability of LiSWARM, we conduct a thorough analysis to benchmark the system’s performance with a swarm consisting of 15,000 drones. The results demonstrate the potential to leverage LiSWARM for other applications, such as battlefield operations, errant drone detection, and securing sensitive areas such as airports and prisons. 
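The clustering step that isolates drone-sized returns from the background can be illustrated generically. The sketch below applies DBSCAN to a synthetic point cloud with made-up thresholds; LiSWARM's actual priority-aware clustering, parameters, and recognition network are not reproduced here.

```python
# Generic sketch of clustering a LiDAR point cloud to isolate small airborne
# objects. DBSCAN with invented thresholds is used purely for illustration;
# it is not LiSWARM's priority-aware clustering algorithm.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic scene: a flat ground plane plus three tight "drone-like" clusters aloft.
ground = np.column_stack([rng.uniform(-50, 50, 5000),
                          rng.uniform(-50, 50, 5000),
                          rng.normal(0.0, 0.05, 5000)])
drones = np.concatenate([rng.normal(c, 0.15, (40, 3))
                         for c in [(5, 5, 20), (-8, 12, 25), (15, -3, 22)]])
cloud = np.vstack([ground, drones])

aloft = cloud[cloud[:, 2] > 2.0]                 # crude background rejection: drop near-ground returns
labels = DBSCAN(eps=0.5, min_samples=10).fit_predict(aloft)

for k in sorted(set(labels) - {-1}):             # -1 marks DBSCAN noise
    pts = aloft[labels == k]
    centroid = pts.mean(axis=0)
    print(f"candidate drone {k}: {len(pts)} points, centroid {np.round(centroid, 1)}")
```

In a full pipeline, each surviving cluster would then be passed to a recognition model and a tracker that associates clusters across frames to recover per-drone trajectories.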
  3. Chen, Shi-Jie (Ed.)
    R-loops are a class of non-canonical nucleic acid structures that typically form during transcription, when the nascent RNA hybridizes with the DNA template strand and leaves the non-template DNA strand unpaired. These structures are abundant in nature and play important physiological and pathological roles. Recent research shows that DNA sequence and topology affect R-loops, yet it remains unclear how these and other factors contribute to R-loop formation. In this work, we investigate the link between nascent RNA folding and the formation of R-loops. We introduce tree-polynomials, a new class of representations of RNA secondary structures. A tree-polynomial representation consists of a rooted tree associated with an RNA secondary structure, together with a polynomial that is uniquely identified with the rooted tree. Tree-polynomials enable accurate, interpretable, and efficient data analysis of RNA secondary structures without pseudoknots. We develop a computational pipeline for investigating and predicting R-loop formation from a genomic sequence. The pipeline obtains nascent RNA secondary structures from co-transcriptional RNA folding software and computes the tree-polynomial representations of the structures. By applying this pipeline to plasmid sequences that contain R-loop-forming genes, we establish a strong correlation between the coefficient sums of tree-polynomials and the experimental probability of R-loop formation. This strong correlation indicates that the pipeline can be used for accurate R-loop prediction. Furthermore, the interpretability of tree-polynomials allows us to characterize the features of RNA secondary structure associated with R-loop formation. In particular, we find that branches with short stems separated by bulges and interior loops are associated with R-loops.
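A polynomial of this kind can be computed by a short recursion over a rooted tree. The sketch below uses one published tree-distinguishing rule (a leaf maps to x; an internal node maps to y plus the product of its children's polynomials); whether this matches the paper's exact tree-polynomial definition, and the example tree used, are assumptions for illustration.

```python
# Sketch of a bivariate polynomial computed recursively over a rooted tree:
# a leaf maps to x, and an internal node maps to y plus the product of its
# children's polynomials. This is one published recursion; matching the
# paper's exact tree-polynomial definition is an assumption.
import sympy as sp

x, y = sp.symbols("x y")

def tree_polynomial(tree):
    """tree is a nested tuple: () is a leaf, (c1, c2, ...) an internal node."""
    if len(tree) == 0:
        return x
    prod = sp.Integer(1)
    for child in tree:
        prod *= tree_polynomial(child)
    return y + prod

# Example: a root with two children, one of which is a cherry (two leaves).
example = (((), ()), ())
p = sp.expand(tree_polynomial(example))
print(p)                                        # y + x*y + x**3
print(sum(p.as_coefficients_dict().values()))   # coefficient sum, the statistic correlated with R-loop probability
```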
  4. Radar-based recognition of human activities of daily living has been a focus of research for over a decade. Current techniques focus on generalized motion recognition of any person and rely on massive amounts of data to characterize generic human activity. However, human gait is actually a person-specific biometric, correlated with health and agility, which depends on a person’s mobility ethogram. This paper proposes a multi-input multi-task deep learning framework for jointly learning a person’s agility and activity. As a proof of concept, we consider three categories of agility represented by slow, fast and nominal motion articulations and show that joint consideration of agility and activity can lead to improved activity classification accuracy and estimation of agility. To the best of our knowledge, this work represents the first work considering personalized motion recognition and agility characterization using radar. 
  5. Hedden, Abigail S; Mazzaro, Gregory J (Ed.)
    Human activity recognition (HAR) with radar-based technologies has become a popular research area in the past decade. However, the objective of these studies is often to classify human activity for anyone; thus, models are trained using data spanning as broad a swath of people and mobility profiles as possible. In contrast, applications of HAR and gait analysis to remote health monitoring require characterization of the person-specific qualities of a person's activities and gait, which depend greatly on age, health, and agility. In fact, the speed or agility with which a person moves can be an important health indicator. In this study, we propose a multi-input multi-task deep learning framework to simultaneously learn a person's activity and agility. In this initial study, we consider three agility states: slow, nominal, and fast. We show that joint learning of agility and activity improves classification accuracy for both the activity and agility recognition tasks. To the best of our knowledge, this study is the first to consider both agility characterization and personalized activity recognition using RF sensing.
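A shared-encoder, two-head network captures the joint activity/agility idea common to the two radar studies above. In the sketch below, the two inputs (a micro-Doppler spectrogram and a range profile), layer sizes, class counts, and equal loss weighting are all illustrative assumptions rather than the authors' actual architecture.

```python
# Sketch of a multi-input, multi-task classifier: two radar-derived inputs feed
# separate encoders, and a shared representation drives an activity head and an
# agility head. Dimensions and losses are illustrative assumptions.
import torch
import torch.nn as nn

class ActivityAgilityNet(nn.Module):
    def __init__(self, n_activities=6, n_agility=3):
        super().__init__()
        self.spectrogram_enc = nn.Sequential(   # input 1: 1x64x64 micro-Doppler spectrogram
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Flatten(), nn.Linear(8 * 16 * 16, 64), nn.ReLU())
        self.range_enc = nn.Sequential(         # input 2: 128-bin range profile
            nn.Linear(128, 64), nn.ReLU())
        self.activity_head = nn.Linear(64 + 64, n_activities)
        self.agility_head = nn.Linear(64 + 64, n_agility)

    def forward(self, spectrogram, range_profile):
        z = torch.cat([self.spectrogram_enc(spectrogram),
                       self.range_enc(range_profile)], dim=1)
        return self.activity_head(z), self.agility_head(z)

# Joint training step: sum of the two cross-entropy losses (equal weighting assumed).
model = ActivityAgilityNet()
spec = torch.randn(4, 1, 64, 64)
rng_prof = torch.randn(4, 128)
act_labels, agi_labels = torch.randint(0, 6, (4,)), torch.randint(0, 3, (4,))
act_logits, agi_logits = model(spec, rng_prof)
loss = nn.functional.cross_entropy(act_logits, act_labels) + \
       nn.functional.cross_entropy(agi_logits, agi_labels)
loss.backward()
print(float(loss))
```

Training both heads against a shared representation is what lets the agility label act as a regularizer for activity classification (and vice versa), which is the joint-learning benefit both abstracts report.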