Title: Set-Based State Estimation of Mobile Robots from Coarse Range Measurements
This paper proposes a localization algorithm for an autonomous mobile robot equipped with binary proximity sensors that only indicate when the robot is within a fixed distance from beacons installed at known positions. Our algorithm leverages an ellipsoidal Set Membership State Estimator (SMSE) that maintains an ellipsoidal bound of the position and velocity states of the robot. The estimate incorporates knowledge of the robot's dynamics, bounds on environmental disturbances, and the binary sensor readings. The localization algorithm is motivated by an underwater scenario where accurate range or bearing measurements are often missing. We demonstrate our approach on an experimental platform using an autonomous blimp.
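Below is a minimal sketch of an ellipsoidal set-membership estimator of the kind described in the abstract. It assumes double-integrator dynamics, an ellipsoidal disturbance bound, and a standard outer-bounding fusion rule for intersecting ellipsoids; the function names (`predict`, `fuse`, `update_binary`), the speed bound, and all numerical values are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Sketch of an ellipsoidal set-membership estimator for a planar robot.
# State x = [px, py, vx, vy]; the set is E(c, P) = {x : (x-c)^T P^{-1} (x-c) <= 1}.
# Dynamics, disturbance bound, and the fusion rule below are illustrative assumptions.

DT = 0.1
A = np.block([[np.eye(2), DT * np.eye(2)],
              [np.zeros((2, 2)), np.eye(2)]])       # double-integrator dynamics
W = np.diag([0.01, 0.01, 0.05, 0.05]) ** 2          # shape of the disturbance ellipsoid

def predict(c, P, beta=0.5):
    """Propagate the ellipsoid through x+ = A x + w, with w in E(0, W).

    The Minkowski sum of two ellipsoids is outer-bounded by the ellipsoid with
    shape (1 + 1/beta) A P A^T + (1 + beta) W for any beta > 0.
    """
    c_next = A @ c
    P_next = (1 + 1 / beta) * (A @ P @ A.T) + (1 + beta) * W
    return c_next, P_next

def fuse(c1, P1, c2, P2, lam=0.5):
    """Outer ellipsoid of the intersection E(c1, P1) ∩ E(c2, P2).

    For lam in (0, 1), every x in the intersection satisfies
    lam*(x-c1)^T P1^{-1}(x-c1) + (1-lam)*(x-c2)^T P2^{-1}(x-c2) <= 1,
    which is itself an ellipsoid; return its center and shape matrix.
    """
    X = lam * np.linalg.inv(P1) + (1 - lam) * np.linalg.inv(P2)
    c = np.linalg.solve(X, lam * np.linalg.solve(P1, c1)
                           + (1 - lam) * np.linalg.solve(P2, c2))
    alpha = (1 - lam * c1 @ np.linalg.solve(P1, c1)
               - (1 - lam) * c2 @ np.linalg.solve(P2, c2) + c @ X @ c)
    return c, alpha * np.linalg.inv(X)

def update_binary(c, P, beacon, radius, v_max=0.5):
    """When a proximity sensor fires, the position lies within `radius` of `beacon`.

    The disk constraint is lifted to the full state using an assumed speed bound
    v_max: the box {|p - beacon| <= radius, |v| <= v_max} fits inside an ellipsoid
    with shape 2*diag(radius^2 I, v_max^2 I), which is then intersected with E(c, P).
    """
    c_meas = np.concatenate([beacon, np.zeros(2)])
    P_meas = 2 * np.diag([radius**2, radius**2, v_max**2, v_max**2])
    return fuse(c, P, c_meas, P_meas)

# One predict/update cycle with a beacon at the origin and a 1 m detection radius.
c, P = np.array([0.5, 0.2, 0.0, 0.0]), np.eye(4)
c, P = predict(c, P)
c, P = update_binary(c, P, beacon=np.array([0.0, 0.0]), radius=1.0)
```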
Award ID(s):
1849228 1828678 1934836
NSF-PAR ID:
10212084
Date Published:
Journal Name:
Proceedings of 4th IEEE Conference on Control Technology and Applications
Page Range / eLocation ID:
404 to 409
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper proposes a nudged particle filter for estimating the pose of a camera mounted on flying robots collecting a video sequence. The nudged particle filter leverages two neural networks, image-to-pose and pose-to-image, trained in an auto-encoder fashion with a dataset of pose-labeled images. Given an image, the camera pose retrieved by the image-to-pose network serves as a special particle that nudges the set of particles generated by the particle filter, while the pose-to-image network computes the likelihood of each particle. We demonstrate that such a nudging scheme effectively mitigates low-likelihood samples during the particle propagation step. Ellipsoidal confidence tubes are constructed from the set of particles to provide a computationally efficient bound on localization error. When an ellipsoidal tube self-intersects, the probability volume of the intersection can be significantly shrunk using a novel Dempster–Shafer probability mass assignment algorithm. Starting from the intersection, a loop closure procedure is developed to move backward in time to shrink the volumes of the entire ellipsoidal tube. Experimental results using the Georgia Tech Miniature Autonomous Blimp platform are provided to demonstrate the feasibility and effectiveness of the proposed algorithms in providing localization and pose estimation based on monocular vision.
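A minimal sketch of the nudging idea follows: the image-to-pose network supplies one special particle, and the pose-to-image network scores all particles. Here `image_to_pose` and `pose_to_image` are placeholders for the trained networks, and the motion model and likelihood are assumed Gaussian forms, not the paper's exact models.

```python
import numpy as np

def nudged_particle_filter_step(particles, weights, image,
                                image_to_pose, pose_to_image,
                                motion_std=0.05, sigma=0.1):
    """One step of a particle filter "nudged" by a learned pose regressor.

    particles: (N, D) array of camera poses; image: current camera frame.
    image_to_pose(image) -> pose;  pose_to_image(pose) -> rendered image.
    Both callables stand in for the trained networks; noise levels are assumptions.
    """
    n = len(particles)

    # 1) Propagate particles with a simple random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_std, particles.shape)

    # 2) Nudge: replace the lowest-weight particle with the pose
    #    retrieved directly from the image-to-pose network.
    particles[np.argmin(weights)] = image_to_pose(image)

    # 3) Weight each particle by how well the pose-to-image network's
    #    rendering matches the observed image (Gaussian photometric likelihood).
    errors = np.array([np.mean((pose_to_image(p) - image) ** 2) for p in particles])
    weights = np.exp(-errors / (2 * sigma ** 2))
    weights /= weights.sum()

    # 4) Resample with replacement according to the weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```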
  2. The Georgia Tech Miniature Autonomous Blimp (GT-MAB) needs localization algorithms to navigate to waypoints in an indoor environment without leveraging an external motion capture system. Indoor aerial robots often require a motion capture system for localization or employ simultaneous localization and mapping (SLAM) algorithms for navigation. The proposed strategy for GT-MAB localization can be accomplished using lightweight sensors on a weight-constrained platform like the GT-MAB. We train an end-to-end convolutional neural network (CNN) that predicts the horizontal position and heading of the GT-MAB using video collected by an onboard monocular RGB camera, while the height of the GT-MAB is estimated from measurements of a time-of-flight (ToF) single-beam laser sensor. The monocular camera and the single-beam laser sensor are sufficient for the localization algorithm to localize the GT-MAB in real time, achieving average 3D positioning errors of less than 20 cm and average heading errors of less than 3 degrees. With the accuracy of our proposed localization method, we are able to use simple proportional-integral-derivative (PID) controllers to control the GT-MAB for waypoint navigation. Experimental results on waypoint following are provided, demonstrating that a CNN can serve as the primary localization method for estimating the pose of an indoor robot and successfully enable navigation to specified waypoints.
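The sketch below shows how a CNN pose estimate and a ToF height reading could feed simple PID controllers for waypoint navigation, in the spirit of the abstract above. The `cnn_pose` callable, the body-frame error rotation, and all gains are illustrative assumptions, not the paper's controller.

```python
import numpy as np

class PID:
    """Textbook PID controller; gains here are illustrative, not the paper's."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_error = 0.0, 0.0

    def step(self, error):
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

DT = 0.05
ctrl_x, ctrl_y = PID(0.8, 0.02, 0.4, DT), PID(0.8, 0.02, 0.4, DT)
ctrl_z, ctrl_yaw = PID(1.0, 0.05, 0.3, DT), PID(0.6, 0.0, 0.2, DT)

def waypoint_step(frame, tof_height, waypoint, cnn_pose):
    """Compute commands toward a waypoint (x, y, z, yaw).

    cnn_pose(frame) -> (x, y, yaw) from the onboard camera (placeholder for the
    trained network); tof_height gives z from the single-beam laser sensor.
    """
    x, y, yaw = cnn_pose(frame)
    ex, ey = waypoint[0] - x, waypoint[1] - y
    # Rotate the horizontal error into the body frame before applying the PIDs.
    ex_b = np.cos(yaw) * ex + np.sin(yaw) * ey
    ey_b = -np.sin(yaw) * ex + np.cos(yaw) * ey
    e_yaw = np.arctan2(np.sin(waypoint[3] - yaw), np.cos(waypoint[3] - yaw))
    return (ctrl_x.step(ex_b), ctrl_y.step(ey_b),
            ctrl_z.step(waypoint[2] - tof_height), ctrl_yaw.step(e_yaw))
```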
  3. In this paper, we address the problem of autonomous multi-robot mapping, exploration and navigation in unknown, GPS-denied indoor or urban environments using a team of robots equipped with directional sensors with limited sensing capabilities and limited computational resources. The robots have no a priori knowledge of the environment and need to rapidly explore and construct a map in a distributed manner using existing landmarks, the presence of which can be detected using onboard sensors, although little to no metric information (distance or bearing to the landmarks) is available. In order to correctly and effectively achieve this, the presence of a necessary density/distribution of landmarks is ensured by design of the urban/indoor environment. We thus address this problem in two phases: (1) during the design/construction of the urban/indoor environment we can ensure that sufficient landmarks are placed within the environment; to that end we develop a filtration-based approach for designing strategic placement of landmarks in an environment. (2) We develop a distributed algorithm which a team of robots, with no a priori knowledge of the environment, can use to explore such an environment, construct a topological map requiring no metric/distance information, and use that map to navigate within the environment. This is achieved using a topological representation of the environment (called a Landmark Complex), instead of constructing a complete metric/pixel map. The representation is built by the robots and used by them for navigation through a balanced strategy involving exploration and exploitation. We use tools from homology theory to identify "holes" in the coverage/exploration of the unknown environment and hence guide the robots towards achieving a complete exploration and mapping of the environment. Our simulation results demonstrate the effectiveness of the proposed metric-free topological (simplicial complex) representation in achieving exploration, localization and navigation within the environment.
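A toy sketch of building such a landmark complex follows: each time a robot simultaneously observes a set of landmark IDs from one vantage point, that set and all of its faces are recorded as simplices, yielding a metric-free abstract simplicial complex whose 1-skeleton can be used as a navigation graph. The class and method names are illustrative, and the homology-based hole detection from the abstract is not shown.

```python
from itertools import combinations

class LandmarkComplex:
    """Toy landmark complex: simplices are frozensets of co-visible landmark IDs."""
    def __init__(self):
        self.simplices = set()

    def observe(self, visible_landmarks):
        """Record one observation: the set of landmark IDs seen from one place.

        Every subset of a co-visible set is added as a simplex, so the complex
        is closed under taking faces (an abstract simplicial complex).
        """
        ids = sorted(set(visible_landmarks))
        for k in range(1, len(ids) + 1):
            for face in combinations(ids, k):
                self.simplices.add(frozenset(face))

    def edges(self):
        """1-simplices, usable as a metric-free graph for navigation."""
        return [tuple(sorted(s)) for s in self.simplices if len(s) == 2]

# Example: two robots report which landmarks their directional sensors detect.
lc = LandmarkComplex()
lc.observe([3, 7, 9])     # robot A sees landmarks 3, 7 and 9 together
lc.observe([7, 9, 12])    # robot B sees landmarks 7, 9 and 12 together
print(lc.edges())
```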
  4. This work presents novel techniques for tightly integrated online information fusion and planning in human-autonomy teams operating in partially known environments. Motivated by dynamic target search problems, we present a new map-based sketch interface for online soft-hard data fusion. This interface lets human collaborators efficiently update map information and continuously build their own highly flexible ad hoc dictionaries for making language-based semantic observations, which can be actively exploited by autonomous agents in optimal search and information-gathering problems. We formally link these capabilities to POMDP algorithms for optimal planning under uncertainty, and develop a new Dynamically Observable Monte Carlo Planning (DOMCP) algorithm as an efficient means of updating online sampling-based planning policies for POMDPs with non-static observation models. DOMCP is validated on a small-scale robot localization problem, and then demonstrated with our new user interface on a simulated dynamic target search scenario in a partially known outdoor environment.
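The snippet below sketches the soft-data-fusion idea only: a human sketch on the map is converted into a per-cell likelihood and fused with the prior target-location belief by a Bayes update. The grid representation, the trust parameter, and the function names are assumptions for illustration; the DOMCP planner itself is not shown.

```python
import numpy as np

def sketch_likelihood(grid_shape, region_mask, inside_prob=0.9):
    """Turn a human sketch ("target is inside this region") into a soft likelihood.

    region_mask: boolean grid marking the sketched region. inside_prob encodes
    how much the human's statement is trusted (an assumed calibration value).
    """
    like = np.full(grid_shape, 1.0 - inside_prob)
    like[region_mask] = inside_prob
    return like

def fuse_observation(prior, likelihood):
    """Bayes update of the target-location belief over the grid."""
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Example: a 20x20 belief grid and a sketched rectangular region of interest.
belief = np.full((20, 20), 1.0 / 400)
mask = np.zeros((20, 20), dtype=bool)
mask[5:12, 8:15] = True                     # region drawn on the map interface
belief = fuse_observation(belief, sketch_likelihood(belief.shape, mask))
```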
  5. This paper presents two methods, jtop (a GUI version of tegrastats) and Nsight Systems, for profiling NVIDIA Jetson embedded GPU devices on a model race car, which is a useful platform for prototyping and field-testing autonomous driving algorithms. The two profilers analyze the power consumption, CPU/GPU utilization, and run time of CUDA C threads of the Jetson TX2 in five different working modes. The performance differences among the five modes are demonstrated using three example programs: vector add in C and CUDA C, a simple ROS (Robot Operating System) package implementing a wall-follow algorithm in Python, and a more complex ROS package implementing a particle filter algorithm for SLAM (simultaneous localization and mapping). The results show that these tools are an effective means of selecting the operating mode of embedded GPU devices.
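For context on the kind of wall-follow workload profiled above, here is a minimal PD wall-following controller of the sort commonly run on such race cars: two laser ranges taken at known angles on the wall side give the car's orientation and distance relative to the wall, and a PD law produces a steering command. This is an illustrative stand-in written in plain Python, not the paper's ROS package; the angles, gains, and distances are assumed values.

```python
import math

class WallFollower:
    """PD wall-following controller (illustrative stand-in for the ROS package)."""

    def __init__(self, desired_dist=1.0, lookahead=0.5, kp=1.0, kd=0.08):
        self.desired_dist, self.lookahead = desired_dist, lookahead
        self.kp, self.kd = kp, kd
        self.prev_error = 0.0

    def steering(self, range_a, range_b, theta=math.radians(50)):
        """Steering command from two laser ranges on the wall side.

        range_b is the beam perpendicular to the car; range_a is taken theta
        closer to the front. Gains, angles, and distances are assumed values.
        """
        # Orientation of the car relative to the wall.
        alpha = math.atan2(range_a * math.cos(theta) - range_b,
                           range_a * math.sin(theta))
        # Wall distance projected a short lookahead ahead of the car.
        dist = range_b * math.cos(alpha) + self.lookahead * math.sin(alpha)
        error = self.desired_dist - dist
        cmd = self.kp * error + self.kd * (error - self.prev_error)
        self.prev_error = error
        return cmd
```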