Title: Jointly Optimizing Placement and Inference for Beacon-based Localization
The ability of robots to estimate their location is crucial for a wide variety of autonomous operations. In settings where GPS is unavailable, measurements of transmissions from fixed beacons provide an effective means of estimating a robot’s location as it navigates. The accuracy of such a beacon-based localization system depends both on how beacons are distributed in the environment and on how the robot’s location is inferred from noisy and potentially ambiguous measurements. We propose an approach for making these design decisions automatically and without expert supervision, by explicitly searching for the placement and inference strategies that, together, are optimal for a given environment. Since this search is computationally expensive, our approach encodes beacon placement as a differentiable neural layer that interfaces with a neural network for inference. This formulation allows us to employ standard techniques for training neural networks to carry out the joint optimization. We evaluate this approach on a variety of environments and settings, and find that it is able to discover designs that enable high localization accuracy.
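The joint design described above can be sketched at toy scale: treat beacon coordinates as trainable parameters alongside a simple "inference network", and descend on localization error with respect to both at once. The following is a hypothetical minimal sketch, not the paper's implementation: a 1-D environment, an exponential signal-decay measurement model, a linear readout standing in for the inference network, and central-difference gradients standing in for backpropagation; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D environment: positions the robot may occupy.
xs = np.linspace(0.0, 1.0, 64)

def measure(x, beacons):
    # Signal-strength-style measurement that decays with distance to each beacon.
    return np.exp(-8.0 * np.abs(x[:, None] - beacons[None, :]))

def loss(params):
    # First 3 entries: beacon positions; next 3: linear readout weights; last: bias.
    beacons, w, c = params[:3], params[3:6], params[6]
    x_hat = measure(xs, beacons) @ w + c    # stand-in "inference network"
    return np.mean((x_hat - xs) ** 2)       # mean squared localization error

def num_grad(f, p, eps=1e-5):
    # Central-difference gradient (stand-in for backprop through the placement layer).
    g = np.zeros_like(p)
    for i in range(p.size):
        d = np.zeros_like(p)
        d[i] = eps
        g[i] = (f(p + d) - f(p - d)) / (2 * eps)
    return g

# Joint parameter vector: beacon placement and inference parameters together.
params = np.concatenate([rng.uniform(0, 1, 3), rng.normal(0, 0.1, 3), [0.5]])
initial = loss(params)
for _ in range(1000):
    params -= 0.1 * num_grad(loss, params)
final = loss(params)
print(f"localization MSE: {initial:.4f} -> {final:.4f}")
```

Because placement enters the loss through a differentiable measurement model, the same gradient step that tunes the readout also moves the beacons; that coupling is the point of the joint formulation.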
Authors:
Award ID(s):
1638072
Publication Date:
NSF-PAR ID:
10042891
Journal Name:
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Sponsoring Org:
National Science Foundation
More Like this
  2. Indoor localization systems typically determine a position using ranging measurements, inertial sensors, environment-specific signatures, or some combination of these methods. Given a floor plan, inertial and signature-based systems can converge on accurate locations by slowly pruning away inconsistent states as a user walks through the space. In contrast, range-based systems are capable of instantly acquiring locations, but they rely on densely deployed beacons and suffer from inaccurate range measurements given non-line-of-sight (NLOS) signals. To get the best of both worlds, we present an approach that systematically exploits the geometry information derived from building floor plans to directly improve location acquisition in range-based systems. Our solving approach can disambiguate multiple feasible locations, taking into account a mix of LOS and NLOS hypotheses, to accurately localize with significantly fewer beacons. We demonstrate our geometry-aware solving approach using a new ultrasonic beacon platform that is able to perform direct time-of-flight ranging on commodity smartphones. The platform uses Bluetooth Low Energy (BLE) for time synchronization and ultrasound for measuring propagation distance. We evaluate our system's accuracy with multiple deployments on a university campus and show that our approach shifts the 80% accuracy point from 4–8 m to 1 m as compared to solvers that do not use the floor plan information. We are able to detect and remove NLOS signals with 91.5% accuracy.
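The key idea in the abstract above, that an NLOS range is an overestimate (the signal travelled a longer, reflected path), can be illustrated with a small hypothesis-testing sketch. This is not the paper's solver: the beacon layout, the grid standing in for the floor plan's free space, and the per-NLOS-link penalty weight are all illustrative assumptions.

```python
import itertools
import numpy as np

# Hypothetical setup: three beacons; the range to beacon 2 is NLOS,
# i.e. overestimated because the signal took a longer, reflected path.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
true_pos = np.array([4.0, 3.0])
ranges = np.linalg.norm(beacons - true_pos, axis=1)
ranges[2] += 4.0  # NLOS bias on beacon 2

# Candidate positions on a grid (stand-in for the floor plan's free space).
xg, yg = np.meshgrid(np.arange(0, 10.01, 0.1), np.arange(0, 8.01, 0.1))
grid = np.column_stack([xg.ravel(), yg.ravel()])
dist = np.linalg.norm(grid[:, None, :] - beacons[None, :, :], axis=2)

best = (np.inf, None, None)
for hyp in itertools.product([True, False], repeat=len(beacons)):  # True = LOS
    cost = np.zeros(len(grid))
    for j, los in enumerate(hyp):
        if los:
            # LOS link: measured range should match the straight-line distance.
            cost += (ranges[j] - dist[:, j]) ** 2
        else:
            # NLOS link: range must be at least the distance; penalize shortfall.
            cost += np.maximum(dist[:, j] - ranges[j], 0.0) ** 2
    cost += hyp.count(False) * 1.0  # illustrative prior: prefer more LOS links
    i = int(np.argmin(cost))
    if cost[i] < best[0]:
        best = (cost[i], grid[i], hyp)

_, estimate, hypothesis = best
print("hypothesis:", hypothesis, "estimate:", estimate)
```

With only the two LOS ranges consistent at a single in-bounds point, the search recovers both the correct LOS/NLOS labeling and a position near the true one, despite the 4 m bias on one range.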
  3. Rehabilitation is a crucial process for patients suffering from motor disorders. The current practice is performing rehabilitation exercises under clinical expert supervision. New approaches are needed to allow patients to perform prescribed exercises at home, alleviating commuting requirements, expert shortages, and healthcare costs. Human joint estimation is a substantial component of these programs, since it offers valuable visualization and feedback based on body movements. Camera-based systems have been popular for capturing joint motion, but they have high cost, raise serious privacy concerns, and require strict lighting and placement settings. We propose a millimeter-wave (mmWave)-based assistive rehabilitation system (MARS) for motor disorders to address these challenges. MARS provides a low-cost solution with competitive object localization and detection accuracy. It first maps the 5D time-series point cloud from mmWave to a lower dimension, then uses a convolutional neural network (CNN) to estimate the accurate locations of human joints. MARS can reconstruct 19 human joints and their skeleton from the point cloud generated by mmWave radar. We evaluate MARS using ten specific rehabilitation movements performed by four human subjects involving all body parts, and obtain an average mean absolute error of 5.87 cm over all joint positions. To the best of our knowledge, this is the first rehabilitation-movement dataset using mmWave point clouds. MARS is evaluated on the Nvidia Jetson Xavier-NX board. Model inference takes only 64 s and consumes 442 J of energy. These results demonstrate the practicality of MARS on low-power edge devices.
  4. Localization of mobile robots is essential for navigation and data collection. This work presents an optical localization scheme for mobile robots that operates during the robot's continuous movement, even though only one bearing angle can be captured at a time. In particular, this paper significantly improves upon our previous works, where the robot had to pause its movement in order to acquire the two bearing-angle measurements needed for position determination. The latter restriction forces the robot to work in a stop-and-go mode, which constrains its mobility. The proposed scheme exploits the velocity prediction from Kalman filtering to properly correlate two consecutive measurements of bearing angles with respect to the base nodes (beacons) and produce a location measurement. The proposed solution is evaluated in simulation, and its advantage is demonstrated through comparison with the traditional approach, where the two consecutive angle measurements are directly used to compute the location.

  5. Optical communication is of increasing interest as an alternative to acoustic communication for robots operating in underwater environments. Our previous work presented a method for LED-based Simultaneous Localization and Communication (SLAC) that uses the bearing angles, obtained in establishing line-of-sight (LOS) between two beacon nodes and a mobile robot for communication, for geometric triangulation and thus localization of the robot. In this paper, we consider the problem of optical localization in the setting of a network of beacon nodes, and specifically how to fuse the measurements from multiple pairs of beacon nodes to obtain the target location. A sensitivity metric, which captures how sensitive the target estimate is to the bearing measurement errors, is used to select the desired pair of beacon nodes. The proposed solution is evaluated with extensive simulation and preliminary experimentation in a setting of three beacon nodes and one mobile node. Comparison with an average-based fusion approach and with an approach using a fixed pair of beacon nodes demonstrates the efficacy of the proposed approach.
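The pair-selection idea above can be illustrated with a small numerical sketch. The paper's exact sensitivity metric is not reproduced here; as a stand-in, this hypothetical example scores each beacon pair by the Frobenius norm of the numerical Jacobian of the triangulated position with respect to the two bearing angles, and picks the pair with the lowest score. Beacon layout and target position are illustrative.

```python
import itertools
import numpy as np

# Hypothetical network of three beacon nodes and one target node.
beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
target = np.array([4.0, 3.0])

def bearing(beacon, p):
    d = beacon - p
    return np.arctan2(d[1], d[0])

def triangulate(bi, bj, ti, tj):
    # The target lies at b - r * u along each bearing ray; solve for r_i, r_j.
    ui = np.array([np.cos(ti), np.sin(ti)])
    uj = np.array([np.cos(tj), np.sin(tj)])
    r = np.linalg.solve(np.column_stack([ui, -uj]), bi - bj)
    return bi - r[0] * ui

def sensitivity(bi, bj, ti, tj, eps=1e-6):
    # Numerical Jacobian of the estimate w.r.t. the two bearing angles;
    # its Frobenius norm scores how much bearing noise is amplified.
    base = triangulate(bi, bj, ti, tj)
    gi = (triangulate(bi, bj, ti + eps, tj) - base) / eps
    gj = (triangulate(bi, bj, ti, tj + eps) - base) / eps
    return np.sqrt(gi @ gi + gj @ gj)

thetas = [bearing(b, target) for b in beacons]
scores = {}
for i, j in itertools.combinations(range(len(beacons)), 2):
    scores[(i, j)] = sensitivity(beacons[i], beacons[j], thetas[i], thetas[j])

best_pair = min(scores, key=scores.get)
i, j = best_pair
estimate = triangulate(beacons[i], beacons[j], thetas[i], thetas[j])
print("selected pair:", best_pair, "estimate:", estimate)
```

In the noise-free case every pair triangulates to the same point; the sensitivity score matters once bearings are noisy, since a low-score pair (rays closer to perpendicular, shorter ranges) amplifies bearing error the least.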