Title: Relative multiplicative extended Kalman filter for observable GPS-denied navigation
This work presents a multiplicative extended Kalman filter (MEKF) for estimating the relative state of a multirotor vehicle operating in a GPS-denied environment. The filter fuses data from an inertial measurement unit and altimeter with relative-pose updates from a keyframe-based visual odometry or laser scan-matching algorithm. Because the global position and heading states of the vehicle are unobservable in the absence of global measurements such as GPS, the filter in this article estimates the state with respect to a local frame that is colocated with the odometry keyframe. As a result, the odometry update provides nearly direct measurements of the relative vehicle pose, making those states observable. Recent publications have rigorously documented the theoretical advantages of such an observable parameterization, including improved consistency, accuracy, and system robustness, and have demonstrated the effectiveness of such an approach during prolonged multirotor flight tests. This article complements this prior work by providing a complete, self-contained, tutorial derivation of the relative MEKF, which has been thoroughly motivated but only briefly described to date. This article presents several improvements and extensions to the filter while clearly defining all quaternion conventions and properties used, including several new useful properties relating to error quaternions and their Euler-angle decomposition. Finally, this article derives the filter both for traditional dynamics defined with respect to an inertial frame, and for robocentric dynamics defined with respect to the vehicle’s body frame, and provides insights into the subtle differences that arise between the two formulations.
Award ID(s):
1650547
NSF-PAR ID:
10316379
Author(s) / Creator(s):
Date Published:
Journal Name:
The International Journal of Robotics Research
Volume:
39
Issue:
9
ISSN:
0278-3649
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
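The multiplicative error state at the heart of an MEKF can be illustrated with a short sketch. The snippet below assumes Hamilton, scalar-first quaternions; the article defines its own conventions, which may differ, so treat this only as an illustration of the multiplicative (rather than additive) attitude error the abstract refers to.

```python
import numpy as np

def quat_mul(q, p):
    # Hamilton product q ⊗ p for scalar-first quaternions [w, x, y, z]
    # (convention assumed for this sketch).
    qw, qv = q[0], q[1:]
    pw, pv = p[0], p[1:]
    w = qw * pw - qv @ pv
    v = qw * pv + pw * qv + np.cross(qv, pv)
    return np.concatenate(([w], v))

def quat_conj(q):
    # Conjugate equals inverse for a unit quaternion.
    return np.concatenate(([q[0]], -q[1:]))

def error_quat(q_est, q_true):
    # Multiplicative attitude error: dq = q_true ⊗ q_est^{-1};
    # identity quaternion when the estimate is exact.
    return quat_mul(q_true, quat_conj(q_est))

def small_angle_quat(dtheta):
    # First-order error quaternion an MEKF linearizes about:
    # dq ≈ [1, dtheta/2] for a small rotation vector dtheta.
    return np.concatenate(([1.0], 0.5 * np.asarray(dtheta, dtype=float)))
```

The key point is that the filter estimates the small rotation vector `dtheta`, not the quaternion itself, which keeps the error state minimal and the quaternion normalized.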
More Like this
  1. Legged robots require knowledge of pose and velocity in order to maintain stability and execute walking paths. Current solutions either rely on vision data, which is susceptible to environmental and lighting conditions, or fusion of kinematic and contact data with measurements from an inertial measurement unit (IMU). In this work, we develop a contact-aided invariant extended Kalman filter (InEKF) using the theory of Lie groups and invariant observer design. This filter combines contact-inertial dynamics with forward kinematic corrections to estimate pose and velocity along with all current contact points. We show that the error dynamics follows a log-linear autonomous differential equation with several important consequences: (a) the observable state variables can be rendered convergent with a domain of attraction that is independent of the system’s trajectory; (b) unlike the standard EKF, neither the linearized error dynamics nor the linearized observation model depend on the current state estimate, which (c) leads to improved convergence properties and (d) a local observability matrix that is consistent with the underlying nonlinear system. Furthermore, we demonstrate how to include IMU biases, add/remove contacts, and formulate both world-centric and robo-centric versions. We compare the convergence of the proposed InEKF with the commonly used quaternion-based extended Kalman filter (EKF) through both simulations and experiments on a Cassie-series bipedal robot. Filter accuracy is analyzed using motion capture, while a LiDAR mapping experiment provides a practical use case. Overall, the developed contact-aided InEKF provides better performance in comparison with the quaternion-based EKF as a result of exploiting symmetries present in the system.
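The invariant error that gives the InEKF its state-independent linearization can be sketched on the rotation group alone. The snippet below is a minimal illustration, not the paper's implementation: it shows the exponential map on SO(3) (Rodrigues formula) and a right-invariant attitude error, which reduces to the identity when the estimate is exact.

```python
import numpy as np

def skew(w):
    # 3x3 skew-symmetric matrix such that skew(w) @ v == cross(w, v).
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def so3_exp(w):
    # Exponential map so(3) -> SO(3) via the Rodrigues formula.
    th = np.linalg.norm(w)
    if th < 1e-10:
        return np.eye(3) + skew(w)  # first-order approximation near zero
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def right_invariant_error(R_est, R_true):
    # Right-invariant error eta = R_est R_true^T. For group-affine dynamics
    # the log of this error obeys a linear ODE that does not depend on the
    # state estimate -- the property the abstract highlights.
    return R_est @ R_true.T
```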
  2. Unlike many current navigation approaches for micro air vehicles, the relative navigation (RN) framework presented in this paper ensures that the filter state remains observable in GPS-denied environments by working with respect to a local reference frame. By subtly restructuring the problem, RN ensures that the filter uncertainty remains bounded, consistent, and normally-distributed, and insulates flight-critical estimation and control processes from large global updates. This paper thoroughly outlines the RN framework and demonstrates its practicality with several long flight tests in unknown GPS-denied and GPS-degraded environments. The relative front end is shown to produce low-drift estimates and smooth, stable control while leveraging off-the-shelf algorithms. The system runs in real time with onboard processing, fuses a variety of vision sensors, and works indoors and outdoors without requiring special tuning for particular sensors or environments. RN is shown to produce globally-consistent, metric, and localized maps by incorporating loop closures and intermittent GPS measurements.
  3. Deep inertial sequence learning has shown promising odometric resolution over model-based approaches for trajectory estimation in GPS-denied environments. However, existing neural inertial dead-reckoning frameworks are not suitable for real-time deployment on ultra-resource-constrained (URC) devices due to substantial memory, power, and compute bounds. Current deep inertial odometry techniques also suffer from gravity pollution, high-frequency inertial disturbances, varying sensor orientation, heading rate singularity, and failure in altitude estimation. In this paper, we introduce TinyOdom, a framework for training and deploying neural inertial models on URC hardware. TinyOdom exploits hardware and quantization-aware Bayesian neural architecture search (NAS) and a temporal convolutional network (TCN) backbone to train lightweight models targeted towards URC devices. In addition, we propose a magnetometer, physics, and velocity-centric sequence learning formulation robust to preceding inertial perturbations. We also expand 2D sequence learning to 3D using a model-free barometric g-h filter robust to inertial and environmental variations. We evaluate TinyOdom for a wide spectrum of inertial odometry applications and target hardware against competing methods. Specifically, we consider four applications: pedestrian, animal, aerial, and underwater vehicle dead-reckoning. Across different applications, TinyOdom reduces the size of neural inertial models by 31× to 134× with 2.5 m to 12 m error in 60 seconds, enabling the direct deployment of models on URC devices while matching or exceeding the localization resolution of the state of the art. The proposed barometric filter tracks altitude within ±0.1 m and is robust to inertial disturbances and ambient dynamics. Finally, our ablation study shows that the introduced magnetometer, physics, and velocity-centric sequence learning formulation significantly improves localization performance even with notably lightweight models.
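A g-h (alpha-beta) filter of the kind the abstract mentions for altitude tracking is simple enough to sketch in full. The gains below are illustrative choices, not the paper's tuned values: the filter predicts altitude with a constant-climb-rate model and blends in each barometric measurement.

```python
def gh_filter(z_seq, x0, dx0, g=0.5, h=0.1, dt=1.0):
    # Classic g-h (alpha-beta) filter over a sequence of barometric
    # altitude measurements z_seq. g weights the altitude correction,
    # h the climb-rate correction (illustrative gains, not tuned values).
    x, dx = x0, dx0
    estimates = []
    for z in z_seq:
        x_pred = x + dx * dt        # predict altitude one step ahead
        r = z - x_pred              # measurement residual
        x = x_pred + g * r          # correct altitude
        dx = dx + h * r / dt        # correct climb rate
        estimates.append(x)
    return estimates
```

Because the model is fixed (no state-dependent Jacobians), the filter is "model-free" in the sense that it needs no vehicle dynamics, only the assumption that climb rate changes slowly between measurements.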
  4. Autonomous driving in dense urban areas presents an especially difficult task. First, globally localizing information, such as GPS signal, often proves to be unreliable in such areas due to signal shadowing and multipath errors. Second, the high-definition environmental maps with sufficient information for autonomous navigation require a large amount of data to be collected from these areas, significant postprocessing of this data to generate the map, and then continual maintenance of the map to account for changes in the environment. This paper addresses the issue of autonomous driving in urban environments by investigating algorithms and an architecture to enable fully functional autonomous driving with little to no reliance on map-based measurements or GPS signals. An extended Kalman filter with odometry, compass, and sparse landmark measurements as inputs is used to provide localization. Real-time detection and estimation of key roadway features are used to create an understanding of the surrounding static scene. Navigation is accomplished by a compass-based navigation control law. Experimental scene understanding results are obtained using computer vision and estimation techniques and demonstrate the ability to probabilistically infer key features of an intersection in real time. Key results from Monte Carlo studies demonstrate the proposed localization and navigation methods. These tests provide success rates of urban navigation under different environmental conditions, such as landmark density, and show that the vehicle can navigate to a goal nearly 10 km away without any external pose update at all. Field tests validate these simulated results and demonstrate that, for given test conditions, an expected range can be determined for a given success rate.
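A compass-based navigation control law of the kind described above can be sketched as a proportional heading controller: compute the bearing to the goal, compare it with the compass heading, and command a turn rate proportional to the wrapped error. This is a generic illustration under that assumption; the paper's actual control law may differ.

```python
import math

def wrap_angle(a):
    # Wrap an angle to the interval (-pi, pi].
    return (a + math.pi) % (2.0 * math.pi) - math.pi

def compass_nav_command(pose, goal, k=1.0):
    # Proportional heading controller (hypothetical form): steer toward the
    # bearing of the goal using only position estimate and compass heading.
    # pose = (x, y, heading); goal = (gx, gy); k is an illustrative gain.
    x, y, heading = pose
    bearing = math.atan2(goal[1] - y, goal[0] - x)
    return k * wrap_angle(bearing - heading)
```

Wrapping the heading error is the one subtlety: without it, a vehicle facing just past the goal bearing would be commanded a nearly full turn in the wrong direction.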
  5. This paper presents SVIn2, a novel tightly-coupled keyframe-based Simultaneous Localization and Mapping (SLAM) system, which fuses Scanning Profiling Sonar, Visual, Inertial, and water-pressure information in a non-linear optimization framework for small and large scale challenging underwater environments. The developed real-time system features robust initialization, loop-closing, and relocalization capabilities, which make the system reliable in the presence of haze, blurriness, low light, and lighting variations, typically observed in underwater scenarios. Over the last decade, Visual-Inertial Odometry and SLAM systems have shown excellent performance for mobile robots in indoor and outdoor environments, but often fail underwater due to the inherent difficulties in such environments. Our approach combats the weaknesses of previous approaches by utilizing additional sensors and exploiting their complementary characteristics. In particular, we use (1) acoustic range information for improved reconstruction and localization, thanks to the reliable distance measurement; (2) depth information from a water-pressure sensor for robust initialization, refining the scale, and assisting to limit the drift in the tightly-coupled integration. The developed software, made open source, has been successfully used to test and validate the proposed system in both benchmark datasets and numerous real-world underwater scenarios, including datasets collected with a custom-made underwater sensor suite and the Aqua2 autonomous underwater vehicle. SVIn2 demonstrated outstanding performance in terms of accuracy and robustness on those datasets and enabled other robotic tasks, for example, planning for underwater robots in the presence of obstacles.
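The depth measurement that SVIn2 obtains from a water-pressure sensor follows from the hydrostatic relation, depth = (P − P_atm) / (ρ g). The constants below (standard atmosphere, a nominal seawater density) are assumptions for illustration; a deployed system would calibrate them.

```python
def depth_from_pressure(p_pa, p_atm=101325.0, rho=1025.0, g=9.81):
    # Hydrostatic depth from absolute pressure (Pa).
    # p_atm: standard atmospheric pressure at the surface (assumption);
    # rho: nominal seawater density in kg/m^3 (assumption; fresh water ~1000).
    return (p_pa - p_atm) / (rho * g)
```

Because this gives depth directly in meters, it anchors the metric scale of the visual-inertial estimate, which is one of the roles the abstract attributes to the water-pressure sensor.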