This paper proposes a nudged particle filter for estimating the pose of a camera mounted on a flying robot collecting a video sequence. The nudged particle filter leverages two neural networks, image-to-pose and pose-to-image, trained in an auto-encoder fashion on a dataset of pose-labeled images. Given an image, the camera pose retrieved by the image-to-pose network serves as a special particle that nudges the set of particles generated by the particle filter, while the pose-to-image network serves to compute the likelihood of each particle. We demonstrate that this nudging scheme effectively mitigates low-likelihood samples during the particle propagation step. Ellipsoidal confidence tubes are constructed from the set of particles to provide a computationally efficient bound on localization error. When an ellipsoidal tube self-intersects, the probability volume of the intersection can be significantly shrunk using a novel Dempster–Shafer probability mass assignment algorithm. Starting from the intersection, a loop-closure procedure moves backward in time to shrink the volumes of the entire ellipsoidal tube. Experimental results on the Georgia Tech Miniature Autonomous Blimp platform demonstrate the feasibility and effectiveness of the proposed algorithms for localization and pose estimation based on monocular vision.
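The core of the method is the nudging step. Below is a minimal sketch of one filter iteration, assuming the two trained networks are available as callables `image_to_pose(img)` and `pose_to_image(pose)`; these names, the random-walk motion model, and the Gaussian image likelihood are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

def nudged_pf_step(particles, weights, image, image_to_pose, pose_to_image,
                   motion_noise=0.05, sigma=0.1, rng=None):
    """One iteration of a nudged particle filter (illustrative sketch)."""
    rng = rng or np.random.default_rng()

    # 1. Propagate: a generic random-walk motion model with additive noise.
    particles = particles + rng.normal(0.0, motion_noise, particles.shape)

    # 2. Nudge: the pose retrieved by the image-to-pose network replaces
    #    the lowest-weight particle, pulling the set toward the observation.
    particles[np.argmin(weights)] = image_to_pose(image)

    # 3. Weight: likelihood of each particle from the pose-to-image
    #    network's reconstruction error against the observed frame.
    errors = np.array([np.mean((pose_to_image(p) - image) ** 2)
                       for p in particles])
    weights = np.exp(-errors / (2.0 * sigma**2))
    weights /= weights.sum()

    # 4. Resample with replacement according to the new weights.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))
```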
Set-Based State Estimation of Mobile Robots from Coarse Range Measurements
This paper proposes a localization algorithm for an autonomous mobile robot equipped with binary proximity sensors that indicate only whether the robot is within a fixed distance of beacons installed at known positions. Our algorithm leverages an ellipsoidal Set Membership State Estimator (SMSE) that maintains an ellipsoidal bound on the position and velocity states of the robot. The estimate incorporates knowledge of the robot's dynamics, bounds on environmental disturbances, and the binary sensor readings. The localization algorithm is motivated by underwater scenarios in which accurate range or bearing measurements are often unavailable. We demonstrate our approach on an experimental platform using an autonomous blimp.
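Conceptually, a triggered binary sensor certifies that the robot lies in a ball of known radius around the beacon, and the measurement update outer-bounds the intersection of the predicted ellipsoid with that ball. The sketch below applies the classical two-ellipsoid fusion rule to the position states only; it is one standard SMSE-style update, not necessarily the authors' exact one, and it assumes the two sets actually intersect (beta > 0).

```python
import numpy as np

def fuse_ellipsoids(a1, A1, a2, A2, lam=0.5):
    """Outer ellipsoid of the intersection of {x : (x - a_i)^T A_i (x - a_i) <= 1}.

    Classical fusion rule: for any lam in (0, 1) the returned ellipsoid
    contains the intersection; lam can be optimized to minimize volume,
    but a fixed value keeps the sketch short.
    """
    A = lam * A1 + (1.0 - lam) * A2
    a = np.linalg.solve(A, lam * (A1 @ a1) + (1.0 - lam) * (A2 @ a2))
    beta = (1.0 - lam * (a1 @ A1 @ a1) - (1.0 - lam) * (a2 @ A2 @ a2)
            + a @ A @ a)
    return a, A / beta  # rescale so the ellipsoid bound is again <= 1

def binary_update(center, shape, beacon, radius):
    # A triggered proximity sensor certifies ||p - beacon|| <= radius,
    # i.e. the position lies in a ball with shape matrix I / radius^2.
    ball_shape = np.eye(len(center)) / radius**2
    return fuse_ellipsoids(center, shape, beacon, ball_shape)
```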
- PAR ID: 10212084
- Date Published:
- Journal Name: Proceedings of the 4th IEEE Conference on Control Technology and Applications
- Page Range / eLocation ID: 404 to 409
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- The Georgia Tech Miniature Autonomous Blimp (GT-MAB) needs localization algorithms to navigate to waypoints in an indoor environment without leveraging an external motion capture system. Indoor aerial robots often require a motion capture system for localization or employ simultaneous localization and mapping (SLAM) algorithms for navigation. The proposed strategy for GT-MAB localization can be accomplished using lightweight sensors on a weight-constrained platform like the GT-MAB. We train an end-to-end convolutional neural network (CNN) that predicts the horizontal position and heading of the GT-MAB from video collected by an onboard monocular RGB camera, while the height of the GT-MAB is estimated from measurements by a time-of-flight (ToF) single-beam laser sensor. The monocular camera and the single-beam laser sensor are sufficient for the localization algorithm to localize the GT-MAB in real time, achieving average 3D positioning errors of less than 20 cm and average heading errors of less than 3 degrees. With this accuracy, we are able to use simple proportional-integral-derivative controllers to control the GT-MAB for waypoint navigation. Experimental results on waypoint following are provided, demonstrating the use of a CNN as the primary localization method for estimating the pose of an indoor robot and successfully enabling navigation to specified waypoints. (A minimal sketch of this CNN-plus-ToF pipeline appears after this list.)
- Monitoring localization safety will be necessary to certify the performance of robots that operate in life-critical applications, such as autonomous passenger vehicles or delivery drones, because many current localization safety methods do not account for the risk of undetected sensor faults. One type of fault, misassociation, occurs when a feature extracted from a mapped landmark is associated with a non-corresponding landmark; it is a common source of error in feature-based navigation applications. This paper accounts for the probability of misassociation when quantifying landmark-based mobile robot localization safety for fixed-lag smoothing estimators. We derive a mobile robot localization safety bound and evaluate it using simulations and experimental data in an urban environment. Results show that localization safety suffers when landmark density is relatively low, such that there are not enough landmarks to adequately localize, and when landmark density is relatively high, because of the increased risk of feature misassociation. (An illustrative risk-bound calculation appears after this list.)
- This paper presents two methods, the tegrastats GUI version jtop and Nsight Systems, for profiling NVIDIA Jetson embedded GPU devices on a model race car, a convenient platform for prototyping and field-testing autonomous driving algorithms. The two profilers analyze the power consumption, CPU/GPU utilization, and run time of CUDA C threads on the Jetson TX2 in five different working modes. The performance differences among the five modes are demonstrated using three example programs: vector add in C and CUDA C, a simple ROS (Robot Operating System) package implementing a wall-following algorithm in Python, and a complex ROS package implementing a particle filter algorithm for SLAM (Simultaneous Localization and Mapping). The results show that the tools are effective means for selecting the operating mode of embedded GPU devices. (A short jtop logging sketch appears after this list.)
- Autonomous underwater robots working with teams of human divers may need to distinguish between different divers, e.g., to recognize a lead diver or to follow a specific team member. This paper describes a technique that enables autonomous underwater robots to track divers in real time and to re-identify them. The approach is an extension of Simple Online and Realtime Tracking (SORT) with an appearance metric (deep SORT). Initial diver detection is performed with a custom CNN designed for real-time diver detection, and appearance features are subsequently extracted for each detected diver. Next, real-time tracking-by-detection is performed with an extension of the deep SORT algorithm. We evaluate this technique on a series of videos of divers performing human-robot collaborative tasks and show that our methods result in more divers being accurately identified during tracking. We also discuss the practical considerations of applying multi-person tracking to onboard autonomous robot operations and consider how failure cases can be addressed during onboard tracking. (A sketch of the appearance-based matching step appears after this list.)
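For the GT-MAB entry above, the localization pipeline pairs a pose-regression CNN with a ToF height reading. The sketch below is a minimal PyTorch version; the layer sizes, input resolution, and the sin/cos heading encoding are assumptions for illustration, not the paper's trained architecture.

```python
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    """Toy CNN regressing horizontal pose from a monocular RGB frame."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Outputs: x, y, sin(heading), cos(heading).
        self.head = nn.Linear(64, 4)

    def forward(self, img):
        return self.head(self.features(img))

def fuse_pose(net_out, tof_height):
    """Combine the CNN's horizontal estimate with the ToF height reading."""
    x, y, s, c = net_out
    heading = torch.atan2(s, c)
    return x, y, tof_height, heading
```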
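For the localization-safety entry, the paper's fixed-lag smoothing bound is derived rigorously there; the toy calculation below only illustrates the general shape of such a bound, splitting integrity risk between the fault-free case and the misassociation case and charging the fault case its full prior probability. All numbers and the Gaussian error model are assumptions.

```python
from math import erf, sqrt

def integrity_risk_bound(alert_limit, sigma_nominal, p_misassoc):
    """Toy bound: P(|error| > AL) <= P(exceed | no fault) + P(misassociation).

    The misassociation term is bounded conservatively by 1, so its prior
    probability is charged in full; the fault-free term is Gaussian.
    """
    phi = 0.5 * (1.0 + erf(alert_limit / (sigma_nominal * sqrt(2.0))))
    p_exceed_nominal = 2.0 * (1.0 - phi)
    return p_exceed_nominal + p_misassoc

# Example: 0.5 m alert limit, 0.1 m nominal sigma, 1e-5 misassociation prior.
print(integrity_risk_bound(0.5, 0.1, 1e-5))
```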
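For the profiling entry, jtop also exposes a Python API through the jetson-stats package, which is one way to log the same power and utilization figures programmatically. A minimal sketch, assuming jetson-stats is installed and run on the Jetson itself:

```python
from jtop import jtop  # pip install jetson-stats (must run on the Jetson)
import csv
import time

# Log power, CPU, and GPU utilization once per sample for one minute,
# e.g. while a ROS node or CUDA kernel under test is running.
with jtop() as jetson, open("profile_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["elapsed_s", "stats"])
    t0 = time.time()
    while jetson.ok() and time.time() - t0 < 60.0:
        # jetson.stats is a dict of the values tegrastats reports
        # (per-core CPU %, GPU %, temperatures, power, active mode).
        writer.writerow([time.time() - t0, jetson.stats])
```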
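For the diver-tracking entry, the heart of the deep SORT extension is associating new detections to existing tracks with an appearance metric layered on SORT's motion cues. The sketch below shows only that matching step; the detector, the appearance embedder, and the cost weighting are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(track_embeddings, det_embeddings, track_boxes, det_boxes,
              appearance_weight=0.7, max_cost=0.6):
    """Match detections to tracks by cosine distance blended with box IoU."""
    def cosine_dist(a, b):
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    def iou(b1, b2):  # boxes as (x1, y1, x2, y2)
        x1, y1 = max(b1[0], b2[0]), max(b1[1], b2[1])
        x2, y2 = min(b1[2], b2[2]), min(b1[3], b2[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
        a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
        return inter / (a1 + a2 - inter)

    cost = np.zeros((len(track_embeddings), len(det_embeddings)))
    for i, (te, tb) in enumerate(zip(track_embeddings, track_boxes)):
        for j, (de, db) in enumerate(zip(det_embeddings, det_boxes)):
            cost[i, j] = (appearance_weight * cosine_dist(te, de)
                          + (1 - appearance_weight) * (1.0 - iou(tb, db)))
    rows, cols = linear_sum_assignment(cost)
    # Reject matches whose blended cost exceeds the gate.
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < max_cost]
```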