
Title: Dronevision: An Experimental 3D Testbed for Flying Light Specks
Today's robotic laboratories for drones are housed in large rooms, at times the size of a warehouse. These spaces are typically equipped with permanent devices to localize the drones, e.g., Vicon infrared cameras. Significant time is invested to fine-tune the localization apparatus to compute and control the position of the drones. One may use these laboratories to develop a 3D multimedia system with miniature-sized drones configured with light sources. As an alternative, this brave new idea paper envisions shrinking these room-sized laboratories to a cube or cuboid that sits on a desk and costs less than 10K dollars. The resulting Dronevision (DV) will be the size of a 1990s television. In addition to light sources, its Flying Light Specks (FLSs) will be network-enabled drones with storage and processing capability to implement decentralized algorithms. The DV will include a localization technique to expedite the development of 3D displays. It will act as a haptic interface for a user to interact with and manipulate 3D virtual illuminations. It will empower an experimenter to design, implement, test, debug, and maintain the software and hardware that realize novel algorithms in the comfort of their office, without having to reserve a laboratory. In addition to enhancing productivity, it will improve the safety of the experimenter by minimizing the likelihood of accidents. This paper introduces the concept of a DV, the research agenda one may pursue using this device, and our plans to realize one.
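As a loose illustration of the FLS abstraction described above, the sketch below models an illumination as a naive one-to-one assignment of target points to FLSs. All names (FlyingLightSpeck, render_illumination) and fields are hypothetical; the paper does not prescribe this interface.

```python
# Minimal sketch: represent a 3D illumination as an assignment of target
# points to FLSs. Hypothetical interface, not the paper's design.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class FlyingLightSpeck:
    fls_id: int
    position: Tuple[float, float, float]   # current location inside the DV
    target: Tuple[float, float, float]     # point of the illumination to render
    color: Tuple[int, int, int]            # RGB emitted by the light source

def render_illumination(points, colors) -> List[FlyingLightSpeck]:
    """Assign one FLS per point of the 3D illumination (naive 1:1 mapping)."""
    return [
        FlyingLightSpeck(i, position=(0.0, 0.0, 0.0), target=p, color=c)
        for i, (p, c) in enumerate(zip(points, colors))
    ]
```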
Award ID(s):
2232382
PAR ID:
10477491
Editor(s):
Ghandeharizadeh S.
Publisher / Repository:
First International Conference on Holodecks
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Ghandeharizadeh S. (Ed.)
    We present flight patterns for the collision-free passage of swarms of drones through one or more openings. The narrow openings provide drones with access to an infrastructure component, such as charging stations to charge their depleted batteries or hangars for storage. The flight patterns form staging areas (queues) that match the rate at which an infrastructure component and its openings consume drones. They prevent collisions and may implement different policies that control the order in which drones pass through an opening. We illustrate the flight patterns with a 3D display that uses drones configured with light sources to illuminate shapes.
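To make the staging-area idea concrete, here is a toy sketch in which a FIFO queue admits drones into an opening no faster than the opening's service rate. The function name and the fixed service rate are assumptions for illustration; the paper's flight patterns are richer than a plain FIFO.

```python
# Toy sketch: a staging queue releases drones at the rate the opening
# consumes them, so drones never pile up at the entrance.
from collections import deque

def drain_queue(waiting: deque, service_rate_hz: float, horizon_s: float):
    """Release queued drones at the rate the opening admits them."""
    released, t = [], 0.0
    interval = 1.0 / service_rate_hz            # seconds between admissions
    while waiting and t <= horizon_s:
        released.append((t, waiting.popleft())) # (admission time, drone id)
        t += interval
    return released

# Example: 5 drones, an opening that admits 2 drones per second.
print(drain_queue(deque(range(5)), service_rate_hz=2.0, horizon_s=10.0))
```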
  2. Ghandeharizadeh S. (Ed.)
    Swarm-Merging (SwarMer) is a decentralized framework to localize Flying Light Specks (FLSs) to render 2D and 3D shapes. An FLS is a miniature-sized drone equipped with one or more light sources to generate different colors and textures with adjustable brightness. It is battery powered and network enabled, with storage and processing capability to implement a decentralized algorithm such as SwarMer. An FLS is unable to render a shape by itself. SwarMer uses the inter-FLS relationships of its organizational framework to compensate for the simplicity of each individual FLS, enabling a swarm of cooperating FLSs to render complex shapes. SwarMer is resilient to network packet loss, FLSs failing, and FLSs leaving to charge their batteries. It is fast, highly accurate, and scales to remain effective when a shape consists of a large number of FLSs.
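The merging intuition can be sketched as follows: when an FLS in swarm A measures the relative position of an anchor FLS in swarm B, swarm B's local coordinates can be translated into A's frame, merging the two swarms. The sketch below is only an illustration of that idea (translation only, no rotation), not SwarMer's actual algorithm; all names are assumptions.

```python
# Illustration of swarm merging: express one swarm's local coordinates in
# the other swarm's frame using a single relative measurement.
import numpy as np

def merge_swarms(swarm_a, swarm_b, a_idx, b_idx, measured_offset):
    """Translate swarm_b into swarm_a's frame.

    measured_offset: vector from FLS a_idx (frame A) to FLS b_idx,
    as measured by a_idx's tracking sensor.
    """
    anchor_in_a = swarm_a[a_idx] + measured_offset  # b_idx expressed in frame A
    shift = anchor_in_a - swarm_b[b_idx]            # frame-B -> frame-A translation
    return swarm_a, swarm_b + shift

a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
b = np.array([[5.0, 5.0, 0.0], [6.0, 5.0, 0.0]])
print(merge_swarms(a, b, a_idx=0, b_idx=0,
                   measured_offset=np.array([2.0, 0.0, 0.0])))
```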
  3. Swarical, a swarm-based hierarchical localization technique, enables miniature drones, Flying Light Specks (FLSs), to accurately and efficiently localize and illuminate complex 2D and 3D shapes. Its accuracy depends on the physical hardware (sensors) that FLSs use to track neighboring FLSs and localize themselves. It uses the specification of these sensors to convert mesh files into point clouds that enable a swarm of FLSs to localize at the highest accuracy afforded by their sensors. Swarical accommodates a heterogeneous mix of FLSs with different orientations for their tracking sensors, ensuring a line of sight between a localizing FLS and its anchor FLS. We present an implementation using Raspberry Pi cameras and ArUco markers. A comparison of Swarical with a state-of-the-art decentralized localization technique shows that it is as accurate and more than 2x faster.
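The camera-and-marker setup the abstract describes can be sketched with OpenCV's aruco module (the pre-4.7 API of opencv-contrib-python is assumed here). The camera matrix and marker size below are placeholder values, and locate_anchor is a hypothetical helper, not Swarical's code.

```python
# Hedged sketch: detect an ArUco marker on an anchor FLS and recover its
# pose with OpenCV's aruco module (opencv-contrib-python, pre-4.7 API).
import cv2
import numpy as np

camera_matrix = np.array([[600.0, 0, 320],      # placeholder intrinsics
                          [0, 600.0, 240],
                          [0, 0, 1]])
dist_coeffs = np.zeros(5)                       # assume an undistorted camera
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def locate_anchor(frame, marker_len_m=0.03):
    """Return (rvec, tvec) of the first detected marker, or None."""
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is None:
        return None
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_len_m, camera_matrix, dist_coeffs)
    return rvecs[0], tvecs[0]                   # pose of the anchor in camera frame
```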
  4. We present the design, implementation, and evaluation of MiFly, a self-localization system for autonomous drones that works across indoor and outdoor environments, including low-visibility, dark, and GPS-denied settings. MiFly performs 6DoF self-localization by leveraging a single millimeter-wave (mmWave) anchor in its vicinity, even if that anchor is visually occluded. MiFly's core contribution is its joint design of a mmWave anchor and a localization algorithm. The low-power anchor features a novel dual-polarization, dual-modulation architecture that enables single-shot 3D localization. mmWave radars mounted on the drone perform 3D localization relative to the anchor and fuse this data with the drone's internal inertial measurement unit (IMU) to estimate its 6DoF trajectory. We implemented and evaluated MiFly on a DJI drone. We collected over 6,600 localization estimates across different trajectory patterns and demonstrate a median localization error of 7 cm and a 90th-percentile error below 15 cm, even in low-light conditions and when the anchor is fully (visually) occluded from the drone. Demo video: youtu.be/LfXfZ26tEok
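As a rough sketch of the fusion step this abstract describes, the toy complementary filter below dead-reckons with the IMU between anchor fixes and blends in each mmWave 3D position measurement. MiFly's actual estimator is more sophisticated; the class name and gain are assumptions.

```python
# Toy complementary filter: IMU prediction between fixes, mmWave correction
# when an anchor-relative 3D position arrives. Not MiFly's estimator.
import numpy as np

class PositionFuser:
    def __init__(self, gain=0.3):
        self.p = np.zeros(3)   # position estimate (m)
        self.v = np.zeros(3)   # velocity estimate (m/s)
        self.gain = gain       # weight given to each mmWave fix

    def predict(self, accel, dt):
        """IMU step: integrate gravity-compensated acceleration."""
        self.v += accel * dt
        self.p += self.v * dt

    def correct(self, mmwave_position):
        """mmWave step: pull the estimate toward the anchor-relative fix."""
        self.p += self.gain * (mmwave_position - self.p)
```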
  5. Recovering the rigid registration between successive camera poses lies at the heart of 3D reconstruction, SLAM, and visual odometry. Registration relies on the ability to compute discriminative 2D features in successive camera images for determining feature correspondences, which is very challenging in feature-poor environments, i.e., low-texture and/or low-light environments. In this paper, we aim to address the challenge of recovering rigid registration between successive camera poses in feature-poor environments in a Visual Inertial Odometry (VIO) setting. In addition to inertial sensing, we instrument a small aerial robot with an RGBD camera and propose a framework that unifies the incorporation of 3D geometric entities: points, lines, and planes. The tracked 3D geometric entities provide constraints in an Extended Kalman Filtering framework. We show that by directly exploiting 3D geometric entities, we can achieve improved registration. We demonstrate our approach in different texture-poor environments, some containing only flat, textureless surfaces that provide essentially no 2D features for tracking. In addition, we evaluate how the addition of different 3D geometric entities contributes to improved pose estimation by comparing an estimated pose trajectory against a ground-truth trajectory obtained from a motion capture system. We consider computationally efficient methods for detecting 3D points, lines, and planes, since our goal is to implement our approach on small mobile robots such as drones.
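The constraint machinery this abstract describes reduces to EKF measurement updates: each tracked point, line, or plane contributes a residual and a Jacobian that correct the state. The sketch below is the textbook update, not the paper's exact filter; the dimensions are placeholders.

```python
# Generic EKF correction standing in for the paper's point/line/plane
# constraints: x (n,) state, P (n,n) covariance, residual (m,),
# H (m,n) measurement Jacobian, R (m,m) measurement noise.
import numpy as np

def ekf_update(x, P, residual, H, R):
    """One EKF measurement update."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ residual                 # corrected state
    P = (np.eye(len(x)) - K @ H) @ P     # corrected covariance
    return x, P
```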