- Award ID(s):
- 1845225
- NSF-PAR ID:
- 10323748
- Date Published:
- Journal Name:
- ArXiv.org
- ISSN:
- 2331-8422
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
In this work, we propose a novel approach for high-accuracy user localization that merges tools from millimeter wave (mmWave) imaging and communications. The key idea of the proposed solution is to leverage mmWave imaging to construct a high-resolution 3D image of the line-of-sight (LOS) and non-line-of-sight (NLOS) objects in the environment at one antenna array. Then, uplink pilot signaling with the user is used to estimate the angle-of-arrival (AoA) and time-of-arrival (ToA) of the dominant channel paths. By projecting the AoA and ToA information onto the 3D mmWave image of the environment, the proposed solution can locate the user with sub-centimeter accuracy. This approach has several gains. First, it allows accurate simultaneous localization and mapping (SLAM) from a single standpoint, i.e., using only one antenna array. Second, it does not require any prior knowledge of the surrounding environment. Third, it can locate NLOS users, even if their signals experience more than one reflection, and without requiring an antenna array at the user. The approach is evaluated using a hardware setup, and its ability to provide sub-centimeter localization accuracy is shown.
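The projection step above is easiest to see in the LOS case: the estimated AoA fixes a ray from the antenna array, and the ToA fixes the distance along it. The sketch below illustrates only that simple case, with hypothetical coordinate conventions (azimuth/elevation from the +x axis); the paper's actual method also resolves NLOS paths by intersecting rays with the 3D mmWave image.

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def los_user_position(array_pos, azimuth, elevation, toa):
    """Locate a LOS user by projecting estimated AoA/ToA into 3D space.

    azimuth/elevation in radians, toa in seconds (one-way uplink delay).
    Minimal LOS-only sketch; NLOS handling requires the environment image.
    """
    rng = C * toa  # one-way path length traveled by the pilot signal
    direction = np.array([
        np.cos(elevation) * np.cos(azimuth),
        np.cos(elevation) * np.sin(azimuth),
        np.sin(elevation),
    ])
    return np.asarray(array_pos, dtype=float) + rng * direction
```

A user 10 m away along the array's +x axis, for instance, maps back to the point (10, 0, 0).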
-
The pervasive operation of customer drones, or small-scale unmanned aerial vehicles (UAVs), has raised serious concerns about their privacy threats to the public. In recent years, privacy invasion events caused by customer drones have been frequently reported. Given this, timely detection of invading drones has become an emerging task. Existing solutions using active radar, video, or acoustic sensors are usually too costly (especially for individuals) or exhibit various constraints (e.g., requiring visual line of sight). Recent research on drone detection with passive RF signals provides an opportunity for low-cost deployment of drone detectors on commodity wireless devices. However, the state of the art in this direction relies on line-of-sight (LOS) RF signals, which makes it work only under very constrained conditions. Support for more common scenarios, i.e., non-line-of-sight (NLOS), is still missing from low-cost solutions. In this paper, we propose a novel detection system for privacy invasion caused by customer drones. Our system features accurate NLOS detection with low-cost hardware (under $50). By exploring and validating the relationship between drone motions and RF signals under the NLOS condition, we find that the RF signatures of drones are somewhat "amplified" by multipath in NLOS. Based on this observation, we design a two-step solution that first classifies received RSS measurements into LOS and NLOS categories; deep learning is then used to extract the signatures and ultimately detect the drones. Our experimental results show that LOS and NLOS signals can be identified at accuracy rates of 98.4% and 96%, respectively. Our drone detection rate under the NLOS condition is above 97% with a system implemented on a Raspberry Pi 3 B+.
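The two-step structure can be sketched as follows. The features and the threshold here are illustrative placeholders, not the paper's actual classifier (which is learned); the sketch only encodes the stated intuition that multipath in NLOS amplifies drone-induced RSS fluctuation.

```python
import numpy as np

def extract_features(rss_window):
    """Simple statistics over a window of RSS samples (dBm).

    Placeholder features for illustration: mean level, dispersion,
    and mean sample-to-sample change.
    """
    w = np.asarray(rss_window, dtype=float)
    return np.array([w.mean(), w.std(), np.abs(np.diff(w)).mean()])

def classify_propagation(features, std_threshold=2.0):
    """Step 1: label a window 'LOS' or 'NLOS'.

    Assumption for this sketch: higher RSS dispersion indicates NLOS,
    since multipath 'amplifies' the drone's RF signature. Step 2 (not
    shown) would feed NLOS windows to a trained deep-learning detector.
    """
    return "NLOS" if features[1] > std_threshold else "LOS"
```

A steady RSS trace would be labeled LOS, while a strongly fluctuating one would be routed to the NLOS detection branch.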
-
Optical communication is of increasing interest as an alternative to acoustic communication for robots operating in underwater environments. Our previous work presented a method for LED-based Simultaneous Localization and Communication (SLAC) that uses the bearing angles, obtained in establishing line-of-sight (LOS) between two beacon nodes and a mobile robot for communication, for geometric triangulation and thus localization of the robot. In this paper, we consider the problem of optical localization in the setting of a network of beacon nodes, and specifically, how to fuse the measurements from multiple pairs of beacon nodes to obtain the target location. A sensitivity metric, which represents how sensitive the target estimate is with respect to the bearing measurement errors, is used for selecting the desired pair of beacon nodes. The proposed solution is evaluated with extensive simulation and preliminary experimentation, in a setting of three beacon nodes and one mobile node. Comparison with an average-based fusion approach and an approach using a fixed pair of beacon nodes demonstrates the efficacy of the proposed approach.
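The geometric triangulation underlying each beacon pair can be sketched as intersecting the two bearing rays. This is a minimal 2D sketch with an assumed convention (bearings measured from the +x axis); the paper's contribution is the sensitivity-based selection and fusion across pairs, which is not shown here.

```python
import numpy as np

def triangulate(p1, theta1, p2, theta2):
    """Locate a target from bearing angles measured at two beacon nodes.

    p1, p2: 2D beacon positions; theta1, theta2: bearings in radians
    (assumed measured from the +x axis). Intersects the two rays
    p1 + t1*d1 and p2 + t2*d2 by solving a 2x2 linear system.
    """
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    A = np.column_stack([d1, -d2])  # p1 + t1*d1 = p2 + t2*d2
    t = np.linalg.solve(A, np.asarray(p2, float) - np.asarray(p1, float))
    return np.asarray(p1, float) + t[0] * d1
```

Note that the system becomes ill-conditioned as the two rays approach parallel, which is exactly the situation the paper's sensitivity metric is designed to penalize when selecting beacon pairs.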
-
Light transport contains all light information between a light source and an image sensor. As an important application of light transport, dual photography has been a popular research topic, but it is challenged by long acquisition times, low signal-to-noise ratio, and the storage and processing of a large number of measurements. In this Letter, we propose a novel hardware setup that combines a flying-spot micro-electro-mechanical system (MEMS) modulated projector with an event camera to implement dual photography for 3D scanning in both line-of-sight (LoS) and non-line-of-sight (NLoS) scenes with a transparent object. In particular, we achieved depth extraction from the LoS scenes and 3D reconstruction of the object in an NLoS scene using event light transport.
-
In this paper, we develop the analytical framework for a novel Wireless signal-based Sensing capability for Robotics (WSR) by leveraging a robot's mobility in 3D space. It allows robots to primarily measure relative direction, or Angle-of-Arrival (AOA), to other robots, while operating in non-line-of-sight unmapped environments and without requiring external infrastructure. We do so by capturing all of the paths that a wireless signal traverses as it travels from a transmitting to a receiving robot in the team, which we term an AOA profile. The key intuition behind our approach is to enable a robot to emulate antenna arrays as it moves freely in 2D and 3D space. The small differences in the phase of the wireless signals are thus processed with knowledge of the robot's local displacement to obtain the profile, via a method akin to Synthetic Aperture Radar (SAR). The main contribution of this work is the development of (i) a framework to accommodate arbitrary 2D and 3D motion, as well as continuous mobility of both signal-transmitting and receiving robots, while computing AOA profiles between them and (ii) a Cramér–Rao Bound analysis, based on antenna array theory, that provides a lower bound on the variance in AOA estimation as a function of the geometry of robot motion. This is a critical distinction from previous work on SAR-based methods, which restrict robot mobility to prescribed motion patterns, do not generalize to the full 3D space, and require transmitting robots to be stationary during data acquisition periods. We show that allowing robots to use their full mobility in 3D space while performing SAR results in more accurate AOA profiles and thus better AOA estimation. We formally characterize this observation as the informativeness of the robots' motion, a computable quantity for which we derive a closed form. All analytical developments are substantiated by extensive simulation and hardware experiments on air/ground robot platforms using 5 GHz WiFi.
Our experimental results bolster our analytical findings, demonstrating that 3D motion provides enhanced and consistent accuracy, with a total AOA error of less than 10° for 95% of trials. We also analytically characterize the impact of displacement estimation errors on the measured AOA and validate this theory empirically using robot displacements obtained from an off-the-shelf Intel Tracking Camera T265. Finally, we demonstrate the performance of our system on a multi-robot task where a heterogeneous air/ground pair of robots continuously measure AOA profiles over a WiFi link to achieve dynamic rendezvous in an unmapped, 300 m² environment with occlusions.
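The SAR-style processing described above can be sketched in a simplified 2D form: as the receiving robot moves, the measured signal phases are matched against the phases a plane wave from each candidate direction would produce along the trajectory. This sketch assumes a static transmitter, a 2D trajectory, and a single path; the paper's framework generalizes to full 3D motion of both robots and multipath profiles.

```python
import numpy as np

WAVELEN = 0.06  # approx. wavelength of 5 GHz WiFi (m)

def aoa_profile(phases, displacements, candidate_angles):
    """Compute a 2D AOA profile from phases measured along a trajectory.

    phases: measured channel phases (radians) at each position;
    displacements: Nx2 array of local 2D positions (the emulated array);
    candidate_angles: candidate AOAs in radians.
    Returns the normalized matched-filter magnitude per candidate angle.
    """
    disp = np.asarray(displacements, dtype=float)
    k = 2 * np.pi / WAVELEN  # wavenumber
    meas = np.exp(1j * np.asarray(phases, dtype=float))
    profile = []
    for th in candidate_angles:
        # Phase a plane wave arriving from angle th would accumulate
        # at each point of the trajectory.
        steering = np.exp(1j * k * (disp[:, 0] * np.cos(th)
                                    + disp[:, 1] * np.sin(th)))
        profile.append(abs(np.vdot(steering, meas)) / len(meas))
    return np.array(profile)
```

The profile peaks at the true direction; as the abstract notes, richer (e.g., fully 3D) trajectories sharpen this peak, which is what the informativeness quantity formalizes.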