Title: Relative multiplicative extended Kalman filter for observable GPS-denied navigation
This work presents a multiplicative extended Kalman filter (MEKF) for estimating the relative state of a multirotor vehicle operating in a GPS-denied environment. The filter fuses data from an inertial measurement unit and altimeter with relative-pose updates from a keyframe-based visual odometry or laser scan-matching algorithm. Because the global position and heading states of the vehicle are unobservable in the absence of global measurements such as GPS, the filter in this article estimates the state with respect to a local frame that is colocated with the odometry keyframe. As a result, the odometry update provides nearly direct measurements of the relative vehicle pose, making those states observable. Recent publications have rigorously documented the theoretical advantages of such an observable parameterization, including improved consistency, accuracy, and system robustness, and have demonstrated the effectiveness of such an approach during prolonged multirotor flight tests. This article complements this prior work by providing a complete, self-contained, tutorial derivation of the relative MEKF, which has been thoroughly motivated but only briefly described to date. This article presents several improvements and extensions to the filter while clearly defining all quaternion conventions and properties used, including several new useful properties relating to error quaternions and their Euler-angle decomposition. Finally, this article derives the filter both for traditional dynamics defined with respect to an inertial frame, and for robocentric dynamics defined with respect to the vehicle's body frame, and provides insights into the subtle differences that arise between the two formulations.
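The defining step of any MEKF is that the attitude error is a small 3-vector folded into the reference quaternion multiplicatively rather than added to it. Below is a minimal sketch of that correction step, assuming Hamilton products, scalar-first [w, x, y, z] quaternions, and a first-order small-angle error quaternion; the article itself defines its conventions explicitly, and these choices are illustrative assumptions only.

```python
import math

def quat_mult(p, q):
    """Hamilton product p (x) q, scalar-first [w, x, y, z] convention."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return [pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw]

def apply_attitude_correction(q_hat, a):
    """Fold a small 3-vector attitude error `a` (rad) into the reference
    quaternion multiplicatively: q <- delta_q(a) (x) q_hat."""
    # First-order small-angle error quaternion, then renormalize.
    dq = [1.0, 0.5 * a[0], 0.5 * a[1], 0.5 * a[2]]
    n = math.sqrt(sum(c * c for c in dq))
    dq = [c / n for c in dq]
    q = quat_mult(dq, q_hat)
    n = math.sqrt(sum(c * c for c in q))
    return [c / n for c in q]
```

After each measurement update the filter applies this correction and resets the error state to zero, which is what keeps the covariance defined over a minimal 3-parameter attitude error rather than the constrained 4-parameter quaternion.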
Award ID(s):
1650547
NSF-PAR ID:
10316379
Journal Name:
The International Journal of Robotics Research
Volume:
39
Issue:
9
ISSN:
0278-3649
Sponsoring Org:
National Science Foundation
More Like this
  1. Unlike many current navigation approaches for micro air vehicles, the relative navigation (RN) framework presented in this paper ensures that the filter state remains observable in GPS-denied environments by working with respect to a local reference frame. By subtly restructuring the problem, RN ensures that the filter uncertainty remains bounded, consistent, and normally distributed, and insulates flight-critical estimation and control processes from large global updates. This paper thoroughly outlines the RN framework and demonstrates its practicality with several long flight tests in unknown GPS-denied and GPS-degraded environments. The relative front end is shown to produce low-drift estimates and smooth, stable control while leveraging off-the-shelf algorithms. The system runs in real time with onboard processing, fuses a variety of vision sensors, and works indoors and outdoors without requiring special tuning for particular sensors or environments. RN is shown to produce globally-consistent, metric, and localized maps by incorporating loop closures and intermittent GPS measurements.
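The core bookkeeping behind the relative front end can be sketched in a few lines: the front-end state is the pose with respect to the current keyframe, and declaring a new keyframe hands the finished relative edge to a back-end chain while the front end resets to the origin. This is an illustrative SE(2) toy (the class and function names are assumptions, and covariance handling is omitted), not the paper's implementation.

```python
import math

def compose(a, b):
    """Compose two SE(2) poses (x, y, yaw): pose b expressed in a's frame."""
    ax, ay, ath = a
    bx, by, bth = b
    return (ax + bx * math.cos(ath) - by * math.sin(ath),
            ay + bx * math.sin(ath) + by * math.cos(ath),
            ath + bth)

class RelativeFrontEnd:
    def __init__(self):
        self.keyframe_edges = []          # back end: chain of relative edges
        self.rel_pose = (0.0, 0.0, 0.0)   # front end: pose w.r.t. keyframe

    def declare_keyframe(self):
        # Hand the completed relative edge to the back end; the front-end
        # state resets to the origin, so it stays observable and its
        # uncertainty stays bounded.
        self.keyframe_edges.append(self.rel_pose)
        self.rel_pose = (0.0, 0.0, 0.0)

    def global_pose(self):
        # Global estimates are reconstructed only when needed by chaining
        # edges; large back-end corrections never touch the front end.
        g = (0.0, 0.0, 0.0)
        for e in self.keyframe_edges:
            g = compose(g, e)
        return compose(g, self.rel_pose)
```

Loop closures and intermittent GPS fixes adjust only the stored edges, which is how flight-critical control is insulated from large global updates.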
  2. This paper addresses the robustness problem of visual-inertial state estimation for underwater operations. Underwater robots operating in a challenging environment are required to know their pose at all times. All vision-based localization schemes are prone to failure due to poor visibility conditions, color loss, and lack of features. The proposed approach utilizes a model of the robot's kinematics together with proprioceptive sensors to maintain the pose estimate during visual-inertial odometry (VIO) failures. Furthermore, the trajectories from successful VIO and those from the model-driven odometry are integrated into a coherent set that maintains a consistent pose at all times. Health monitoring tracks the VIO process, ensuring timely switches between the two estimators. Finally, loop closure is implemented on the overall trajectory. The resulting framework is a robust estimator switching between model-based and visual-inertial odometry (SM/VIO). Experimental results from numerous deployments of the Aqua2 vehicle demonstrate the robustness of our approach over coral reefs and a shipwreck.
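The health-monitored switching idea is simple to state in code: when a VIO health signal (for example, the tracked-feature count) drops below a threshold, the model-driven estimate takes over, and because both estimators report pose increments the fused trajectory stays continuous across the switch. This toy 1-D version, with assumed names and threshold, only illustrates that pattern; SM/VIO's actual health metrics and models are described in the paper.

```python
class SwitchingEstimator:
    def __init__(self, min_features=15):
        self.min_features = min_features
        self.pose = 0.0          # fused 1-D pose (position along track)
        self.using_vio = True

    def step(self, vio_delta, model_delta, tracked_features):
        """Advance the fused pose by one increment, choosing the source
        based on the VIO health signal."""
        healthy = tracked_features >= self.min_features
        if healthy != self.using_vio:
            # Timely switch with no pose jump: both estimators report
            # increments, so continuity comes for free.
            self.using_vio = healthy
        self.pose += vio_delta if self.using_vio else model_delta
        return self.pose
```

A full system would also hand the stitched trajectory to a loop-closure back end, as the paper does, rather than trusting dead reckoning indefinitely.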
  3. Deep inertial sequence learning has shown promising odometric resolution over model-based approaches for trajectory estimation in GPS-denied environments. However, existing neural inertial dead-reckoning frameworks are not suitable for real-time deployment on ultra-resource-constrained (URC) devices due to substantial memory, power, and compute bounds. Current deep inertial odometry techniques also suffer from gravity pollution, high-frequency inertial disturbances, varying sensor orientation, heading rate singularity, and failure in altitude estimation. In this paper, we introduce TinyOdom, a framework for training and deploying neural inertial models on URC hardware. TinyOdom exploits hardware and quantization-aware Bayesian neural architecture search (NAS) and a temporal convolutional network (TCN) backbone to train lightweight models targeted at URC devices. In addition, we propose a magnetometer, physics, and velocity-centric sequence learning formulation robust to the aforementioned inertial perturbations. We also expand 2D sequence learning to 3D using a model-free barometric g-h filter robust to inertial and environmental variations. We evaluate TinyOdom for a wide spectrum of inertial odometry applications and target hardware against competing methods. Specifically, we consider four applications: pedestrian, animal, aerial, and underwater vehicle dead-reckoning. Across different applications, TinyOdom reduces the size of neural inertial models by 31× to 134× with 2.5 m to 12 m error in 60 seconds, enabling the direct deployment of models on URC devices while still matching or exceeding the localization resolution of the state of the art. The proposed barometric filter tracks altitude within ±0.1 m and is robust to inertial disturbances and ambient dynamics. Finally, our ablation study shows that the introduced magnetometer, physics, and velocity-centric sequence learning formulation significantly improves localization performance even with notably lightweight models.
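A g-h (alpha-beta) filter of the kind used for the barometric altitude tracker is a two-state fixed-gain filter: predict altitude forward with the current rate, then blend each barometer reading back in through the residual. The gains below are illustrative assumptions; TinyOdom's filter and its model-free tuning are described in the paper.

```python
def gh_filter(measurements, z0, dz0, dt, g=0.2, h=0.02):
    """Track altitude z and climb rate dz through a sequence of barometer
    readings with fixed gains g (state) and h (rate)."""
    z, dz = z0, dz0
    estimates = []
    for z_meas in measurements:
        # Predict with the current rate.
        z_pred = z + dz * dt
        # Residual-driven update of both state and rate.
        r = z_meas - z_pred
        z = z_pred + g * r
        dz = dz + h * r / dt
        estimates.append(z)
    return estimates
```

Because the gains are fixed, the filter needs no process or measurement noise model, which is what "model-free" buys on hardware too small for a full Kalman filter.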
  4. Vision-based state estimation is challenging in underwater environments due to color attenuation, low visibility, and floating particulates. All visual-inertial estimators are prone to failure due to degradation in image quality. However, underwater robots are required to keep track of their pose during field deployments. We propose a robust estimator that fuses the robot's dynamic and kinematic models with proprioceptive sensors to propagate the pose whenever visual-inertial odometry (VIO) fails. To detect VIO failures, health tracking is used, which enables switching between pose estimates from VIO and a kinematic estimator. Loop closure is implemented on a weighted pose graph for global trajectory optimization. Experimental results from field deployments of an Aqua2 Autonomous Underwater Vehicle demonstrate the robustness of our approach in different underwater environments, such as over shipwrecks and coral reefs. The proposed hybrid approach is robust to VIO failures, producing consistent trajectories even in harsh conditions.
  5. Autonomous driving in dense urban areas presents an especially difficult task. First, globally localizing information, such as GPS signal, often proves to be unreliable in such areas due to signal shadowing and multipath errors. Second, high-definition environmental maps with sufficient information for autonomous navigation require a large amount of data to be collected from these areas, significant postprocessing of this data to generate the map, and then continual maintenance of the map to account for changes in the environment. This paper addresses the issue of autonomous driving in urban environments by investigating algorithms and an architecture to enable fully functional autonomous driving with little to no reliance on map-based measurements or GPS signals. An extended Kalman filter with odometry, compass, and sparse landmark measurements as inputs is used to provide localization. Real-time detection and estimation of key roadway features are used to create an understanding of the surrounding static scene. Navigation is accomplished by a compass-based navigation control law. Experimental scene understanding results are obtained using computer vision and estimation techniques and demonstrate the ability to probabilistically infer key features of an intersection in real time. Key results from Monte Carlo studies demonstrate the proposed localization and navigation methods. These tests report success rates for urban navigation under different environmental conditions, such as landmark density, and show that the vehicle can navigate to a goal nearly 10 km away without any external pose update at all. Field tests validate these simulated results and demonstrate that, for given test conditions, an expected range can be determined for a given success rate.
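The compass-aided localization idea above reduces, in its simplest form, to a one-state Kalman filter: propagate heading from odometry yaw rate, then correct it with noisy compass readings via a scalar gain. This sketch assumes illustrative noise values and names; the paper's filter also carries position states and sparse landmark updates.

```python
import math

class HeadingEKF:
    def __init__(self, theta0=0.0, P0=1.0, q=1e-4, r=0.05):
        self.theta, self.P = theta0, P0
        self.q, self.r = q, r   # process / compass measurement variance

    def predict(self, yaw_rate, dt):
        """Propagate heading from odometry yaw rate; uncertainty grows."""
        self.theta += yaw_rate * dt
        self.P += self.q * dt

    def update(self, compass_theta):
        """Correct heading with a compass reading via the scalar gain."""
        # Wrap the innovation to (-pi, pi] so +/-pi crossings behave.
        y = math.atan2(math.sin(compass_theta - self.theta),
                       math.cos(compass_theta - self.theta))
        K = self.P / (self.P + self.r)
        self.theta += K * y
        self.P *= (1.0 - K)
        return self.theta
```

Because the compass directly observes heading, the heading error stays bounded regardless of distance traveled, which is the property that lets the vehicle reach goals many kilometers away without an external pose update.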