
Title: WSR: A WiFi Sensor for Collaborative Robotics
In this paper we derive a new capability for robots to measure relative direction, or Angle-of-Arrival (AOA), to other robots while operating in non-line-of-sight and unmapped environments, without requiring external infrastructure. We do so by capturing all of the paths that a WiFi signal traverses as it travels from a transmitting to a receiving robot in the team, which we term an AOA profile. The key intuition behind our approach is to emulate antenna arrays in the air as a robot moves freely in 2D or 3D space. The small differences in the phase and amplitude of WiFi signals are thus processed with knowledge of a robot's local displacements (often provided via inertial sensors) to obtain the profile, via a method akin to Synthetic Aperture Radar (SAR). The main contribution of this work is the development of i) a framework to accommodate arbitrary 2D and 3D trajectories, as well as continuous mobility of both transmitting and receiving robots, while computing AOA profiles between them, and ii) an accompanying analysis, based on the Cramér-Rao Bound and antenna array theory, that provides a lower bound on the variance of AOA estimation as a function of robot trajectory geometry. This is a critical distinction from previous work on SAR, which restricts robot mobility to prescribed motion patterns, does not generalize to the full 3D space, and/or requires transmitting robots to be static during data acquisition periods. In fact, we find that allowing robots to use their full mobility in 3D space while performing SAR results in more accurate AOA profiles and thus better AOA estimation. We formally characterize this observation as the informativeness of the trajectory, a computable quantity for which we derive a closed form. All theoretical developments are substantiated by extensive simulation and hardware experiments on air/ground robot platforms.
Our experimental results bolster our theoretical findings, demonstrating that 3D trajectories provide enhanced and consistent accuracy, with AOA error of less than 10 degrees for 95% of trials. We also show that our formulation can be used with an off-the-shelf trajectory estimation sensor (Intel RealSense T265 tracking camera) for estimating the robots' local displacements, and we provide theoretical as well as empirical results that show the impact of typical trajectory estimation errors on the measured AOA. Finally, we demonstrate the performance of our system on a multi-robot task where a heterogeneous air/ground pair of robots continuously measures AOA profiles over a WiFi link to achieve dynamic rendezvous in an unmapped, 300-square-meter environment with occlusions.
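As a rough illustration of the idea described above (not the authors' implementation), the following sketch emulates a virtual antenna array from a robot's known 3D displacements and computes a Bartlett-style AOA profile by correlating measured channel phases against candidate directions. The helix trajectory, wavelength, and single-path channel model are all illustrative assumptions.

```python
import numpy as np

# Illustrative SAR-style AOA profile: the receiver's known local
# displacements act as a synthetic antenna array, and measured channel
# phases are matched against every candidate arrival direction.

wavelength = 0.06  # meters, roughly a 5 GHz WiFi carrier (assumed)

# Known 3D receiver trajectory (a short helix), e.g. from inertial odometry.
k = np.arange(32)
positions = np.column_stack([0.1 * np.cos(0.3 * k),
                             0.1 * np.sin(0.3 * k),
                             0.01 * k])

def steering(azimuth, elevation):
    """Expected per-position channel phase for a plane wave from (az, el)."""
    d = np.array([np.cos(elevation) * np.cos(azimuth),
                  np.cos(elevation) * np.sin(azimuth),
                  np.sin(elevation)])
    return np.exp(2j * np.pi * (positions @ d) / wavelength)

# Toy channel measurements for a ground-truth direction of (40°, 10°).
h_meas = steering(np.deg2rad(40.0), np.deg2rad(10.0))

# Bartlett-style profile: matched-filter power at every candidate direction.
az_grid = np.deg2rad(np.arange(0.0, 360.0, 2.0))
el_grid = np.deg2rad(np.arange(-30.0, 31.0, 2.0))
profile = np.array([[abs(np.vdot(steering(az, el), h_meas)) ** 2
                     for el in el_grid] for az in az_grid])

i, j = np.unravel_index(np.argmax(profile), profile.shape)
est_az, est_el = np.rad2deg(az_grid[i]), np.rad2deg(el_grid[j])
print(est_az, est_el)  # peak of the profile, near (40, 10)
```

In the paper's setting the measured channel also contains multipath components, so a real profile generally has several peaks; the full 3D trajectory is what allows elevation to be resolved at all.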
Award ID(s):
1845225 2114733
Sponsoring Org:
National Science Foundation
More Like this
  1. We present simplified 2D dynamic models of the 3D, passive-dynamic-inspired walking gait of a physical quasi-passive walking robot. Quasi-passive walkers are robots that integrate passive walking principles and some form of actuation. Our ultimate goal is to better understand the dynamics of actuated walking in order to create miniature, untethered, bipedal walking robots. At these smaller scales there is limited space and power available, so in this work we leverage the passive dynamics of walking to reduce the burden on the actuators and controllers. Prior quasi-passive walkers are much larger than our intended scale, have more complicated mechanical designs, and require more precise feedback control and/or learning algorithms. By leveraging the passive 3D dynamics, carefully designing the spherical feet, and changing the actuation scheme, we are able to produce a very simple 3D bipedal walking model that has a total of 5 rigid bodies and a single actuator per leg. Additionally, the model requires no feedback, as each actuator is controlled by an open-loop sinusoidal profile. We validate this model in 2D simulations in which we measure the stability properties while varying the leg length/amplitude ratio, the frequency of actuation, and the spherical foot profile. These results are also validated experimentally on a 3D walking robot (15 cm leg length) that implements the modeled walking dynamics. Finally, we experimentally investigate the ability to control the heading of the robot by changing its open-loop control parameters.
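The open-loop actuation scheme described above can be sketched in a few lines. The frequency, amplitude, and half-period phase offset between legs below are assumed values for illustration, not the robot's actual parameters.

```python
import math

# Hypothetical open-loop control: one actuator per leg, each following a
# sinusoidal profile with no feedback; the legs run half a period apart.
FREQUENCY = 1.5   # Hz, assumed actuation frequency
AMPLITUDE = 0.2   # rad, assumed joint amplitude

def leg_command(t, leg):
    """Commanded actuator angle for leg 0 or 1 at time t (seconds)."""
    phase = math.pi * leg  # half-period offset between the two legs
    return AMPLITUDE * math.sin(2 * math.pi * FREQUENCY * t + phase)

# The two legs mirror each other at every instant:
print(leg_command(0.1, 0), leg_command(0.1, 1))
```

Because the control signal depends only on time, no state estimation or feedback loop is needed, which is what makes the approach attractive at small scales.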
  2. Existing methods for pedestrian motion trajectory prediction learn and predict trajectories in the 2D image space. In this work, we observe that it is much more efficient to learn and predict pedestrian trajectories in 3D space, since human motion occurs in the 3D physical world and behavior patterns are better represented in 3D. To this end, we use a stereo camera system to detect and track the human pose with deep neural networks. During pose estimation, these twin deep neural networks satisfy the stereo consistency constraint. We adapt the existing SocialGAN method to perform pedestrian motion trajectory prediction from the 2D to the 3D space. Our extensive experimental results demonstrate that our proposed method significantly improves pedestrian trajectory prediction performance, outperforming existing state-of-the-art methods.
  3. Reconstructing 4D vehicular activity (3D space and time) from cameras is useful for autonomous vehicles, commuters, and local authorities planning for smarter and safer cities. Traffic is inherently repetitious over long periods, yet current deep learning-based 3D reconstruction methods have not considered such repetitions and have difficulty generalizing to newly installed intersection cameras. We present a novel approach exploiting longitudinal (long-term) repetitious motion as self-supervision to reconstruct 3D vehicular activity from a video captured by a single fixed camera. Starting from off-the-shelf 2D keypoint detections, our algorithm optimizes 3D vehicle shapes and poses and then clusters their trajectories in 3D space. The 2D keypoints and trajectory clusters accumulated over the long term are later used to improve the 2D and 3D keypoints via self-supervision, without any human annotation. Our method improves reconstruction accuracy over the state of the art on scenes with a significant visual difference from the keypoint detector's training data, and has many applications including velocity estimation, anomaly detection, and vehicle counting. We demonstrate results on traffic videos captured at multiple city intersections, collected using our smartphones, YouTube, and other public datasets.
  4. In the next wave of swarm-based applications, unmanned aerial vehicles (UAVs) need to communicate with peer drones in any direction of a three-dimensional (3D) space. On a given drone and across drones, various antenna positions and orientations are possible. We know that, in free space, high levels of signal loss are expected if the transmitting and receiving antennas are cross-polarized. However, increasing the reflective and scattering objects in the channel between a transmitter and receiver can cause the received polarization to become completely independent of the transmitted polarization, making the cross-polarization of antennas insignificant. Usually, these effects are studied in the context of cellular and terrestrial networks and have not been analyzed when those objects are the actual bodies of the communicating drones, which can take different relative directions or move at various elevations. In this work, we show that the body of the drone can affect the received power across various antenna orientations and positions and act as a local scatterer that increases channel depolarization, reducing the cross-polarization discrimination (XPD). To investigate these effects, we perform experimentation that is staged in terms of complexity, from the controlled environment of an anechoic chamber with and without drone bodies to in-field environments where drone-mounted antennas are in flight with various orientations and relative positions, with the following outcomes: (i) drone relative direction can significantly impact the XPD values, (ii) elevation angle is a critical factor in 3D link performance, (iii) antenna spacing requirements are altered for co-located cross-polarized antennas, and (iv) cross-polarized antenna setups more than double spectral efficiency. Our results can serve as a guide for accurately simulating and modeling UAV networks and drone swarms.
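For reference, XPD is simply the ratio of co-polarized to cross-polarized received power, expressed in dB. The power values in the sketch below are illustrative, not measurements from the study.

```python
import math

def xpd_db(p_co, p_cross):
    """Cross-polarization discrimination in dB; low XPD means the channel
    strongly depolarizes the signal (e.g. due to scattering bodies)."""
    return 10.0 * math.log10(p_co / p_cross)

# Free space, little depolarization: high XPD.
print(xpd_db(1.0, 0.01))  # 20.0 dB
# A drone body acting as a local scatterer: powers converge, XPD drops.
print(xpd_db(1.0, 0.5))   # ~3.0 dB
```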
  5. Localization is a key ability for robot navigation and collision avoidance. The advent of technologies such as GPS has led to many improvements in terrestrial navigation. Unfortunately, traditional electromagnetic (EM) communications propagate poorly through lossy media such as underwater and underground environments. Therefore, localization remains a challenging problem in such environments, necessitating other approaches such as acoustics and magnetic induction (MI). This paper investigates estimating the relative location of a pair of MI triaxial coil antennas in air, as a preliminary step toward underwater applications. By measuring the voltages induced in the receiving antenna when the transmitting antenna's coils are turned on sequentially, the distance between the antennas can be computed. Then, with knowledge of the current velocities of the antennas, we can apply a particle filter to generate an estimate of the location of the transmitting antenna with respect to the receiving one. The theory is supported by simulations and later verified through a series of experiments.
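A minimal sketch of the filtering step described above (not the paper's implementation): a particle filter fuses noisy distance measurements with known transmitter motion to localize the transmitter relative to a receiver at the origin. The trajectory, noise levels, and particle count are all assumed values.

```python
import numpy as np

# Hypothetical range-plus-velocity particle filter in 2D. The receiver sits
# at the origin; the transmitter's velocity is known, and each step yields a
# noisy coil-derived distance measurement.

rng = np.random.default_rng(0)
n = 2000
true_pos = np.array([3.0, 4.0])   # unknown transmitter position (meters)
range_sigma = 0.1                 # std of the distance measurement (meters)

particles = rng.uniform(-10.0, 10.0, size=(n, 2))
weights = np.full(n, 1.0 / n)

for step in range(60):
    # Known transmitter velocity (a gently turning path aids observability).
    theta = 0.15 * step
    velocity = 0.12 * np.array([np.cos(theta), np.sin(theta)])
    true_pos = true_pos + velocity
    # Predict: shift particles by the known velocity, plus process noise.
    particles = particles + velocity + rng.normal(0.0, 0.02, particles.shape)
    # Update: weight particles by agreement with the measured distance.
    z = np.linalg.norm(true_pos) + rng.normal(0.0, range_sigma)
    d = np.linalg.norm(particles, axis=1)
    weights = weights * np.exp(-0.5 * ((z - d) / range_sigma) ** 2) + 1e-300
    weights = weights / weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)

estimate = weights @ particles
print(estimate, true_pos)
```

Note that range-only measurements along a straight-line path leave a mirror ambiguity; a curving path (or, in the paper, the richer triaxial coil measurements) is what makes the position fully observable.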