This paper introduces a new invariant extended Kalman filter design that produces real-time state estimates with rapid error convergence for human body motion estimation, even in the presence of sensor misalignment and initial state estimation errors. The filter fuses data from an inertial measurement unit (IMU) attached to the body (e.g., pelvis or chest) with a virtual measurement of zero stance-foot velocity (i.e., leg odometry). The key novelty of the proposed filter is that its process model satisfies the group-affine property, while the filter explicitly addresses IMU placement error by modeling that error as Brownian motion in the stochastic process model and incorporating it into the leg odometry. Although the measurement model is imperfect (i.e., it does not possess an invariant observation form) and its linearization therefore relies on the state estimate, experimental results demonstrate fast convergence of the proposed filter (within 0.2 seconds) during squatting motions, even under significant IMU placement inaccuracy and initial estimation errors.
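The zero stance-foot velocity pseudo-measurement mentioned above is the core of leg odometry: whenever a foot is in stable contact with the ground, the filter treats "velocity = 0" as a measurement. A minimal sketch of that update on a toy 1-D [position, velocity] Kalman state follows; the state layout, noise value, and function name are illustrative, not the paper's filter:

```python
import numpy as np

def zupt_update(x, P, sigma_v=0.01):
    """Apply a zero-velocity pseudo-measurement (ZUPT) to a [pos, vel] state.

    Illustrative sketch of the leg-odometry idea only; the paper's filter
    operates on a matrix Lie group state, not this toy 1-D state.
    """
    H = np.array([[0.0, 1.0]])            # observe velocity only
    R = np.array([[sigma_v ** 2]])        # pseudo-measurement noise
    y = np.array([0.0]) - H @ x           # innovation: zero minus predicted velocity
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

After the update, the velocity estimate is pulled toward zero while the position estimate is corrected only through the velocity-position cross-covariance.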
Legged Robot State Estimation within Non-inertial Environments
This work investigates the robot state estimation problem within a non-inertial environment. The proposed state estimation approach relaxes the common assumption of static ground in the system modeling. The process and measurement models explicitly treat the movement of the non-inertial environment without requiring knowledge of its motion in the inertial frame or relying on GPS or sensing of environmental landmarks. Further, the proposed state estimator is formulated as an invariant extended Kalman filter (InEKF) [1] with the deterministic part of its process model obeying the group-affine property, leading to log-linear error dynamics. The observability analysis confirms that the robot's pose (i.e., position and orientation) and velocity relative to the non-inertial environment are observable under the proposed InEKF.
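The practical payoff of the group-affine property is that the invariant error evolves independently of the state trajectory. A minimal numerical illustration, under the simplifying assumption of pure gyroscope propagation on SO(3) (not the full InEKF state of either paper): the right-invariant attitude error η = R̂Rᵀ stays exactly constant no matter what angular velocity the system experiences.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector w."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    """Rodrigues' formula: exponential map from so(3) to SO(3)."""
    th = np.linalg.norm(w)
    if th < 1e-9:
        return np.eye(3) + skew(w)
    K = skew(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * (K @ K)

def propagate(R, omega, dt):
    """Attitude propagation R_{k+1} = R_k Exp(omega * dt)."""
    return R @ exp_so3(omega * dt)
```

Propagating a "true" and an "estimated" rotation with the same gyro inputs leaves the product R̂Rᵀ fixed at its initial value, which is the trajectory-independence that makes the log-linear error dynamics possible.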
- Award ID(s):
- 2423239
- PAR ID:
- 10581816
- Publisher / Repository:
- IEEE
- Date Published:
- Format(s):
- Medium: X
- Institution:
- 2024 IEEE International Conference on Advanced Intelligent Mechatronics
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper addresses the robustness problem of visual-inertial state estimation for underwater operations. Underwater robots operating in challenging environments must know their pose at all times. All vision-based localization schemes are prone to failure due to poor visibility conditions, color loss, and lack of features. The proposed approach utilizes a model of the robot's kinematics together with proprioceptive sensors to maintain the pose estimate during visual-inertial odometry (VIO) failures. Furthermore, the trajectories from successful VIO and those from the model-driven odometry are integrated into a coherent set that maintains a consistent pose at all times. Health monitoring tracks the VIO process, ensuring timely switches between the two estimators. Finally, loop closure is implemented on the overall trajectory. The resulting framework is a robust estimator that switches between model-based and visual-inertial odometry (SM/VIO). Experimental results from numerous deployments of the Aqua2 vehicle demonstrate the robustness of our approach over coral reefs and a shipwreck.
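The health-monitoring logic that triggers the switch between the two estimators can be sketched as a small supervisor that checks VIO health indicators each cycle. The class name, the specific indicators (tracked-feature count, covariance trace), and the thresholds below are illustrative assumptions, not the SM/VIO implementation:

```python
class SwitchingEstimator:
    """Sketch of health-monitored switching between two odometry sources.

    Hypothetical health indicators: number of tracked visual features and
    the trace of the VIO covariance; thresholds are made up for illustration.
    """

    def __init__(self, min_features=15, max_cov_trace=1.0):
        self.min_features = min_features
        self.max_cov_trace = max_cov_trace
        self.active = "vio"

    def update(self, tracked_features, cov_trace):
        """Select the estimator to trust for this cycle."""
        vio_healthy = (tracked_features >= self.min_features
                       and cov_trace <= self.max_cov_trace)
        self.active = "vio" if vio_healthy else "model"
        return self.active
```

Keeping the switch stateless per cycle, as here, is the simplest design; a real system would likely add hysteresis so that brief visual dropouts do not cause rapid toggling.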
-
This paper proposes a new evacuation strategy for mobile agents fleeing an indoor environment with a contaminated spatial field. Since the effects of a contaminated field on the mobile agents are cumulative, a policy ensuring that each agent reaches safety while minimizing the accumulated effects of the spatial field is warranted. While each agent is fleeing towards safety, it is also collecting information on the spatial field along its own escape path. This process information, provided by each evacuating mobile agent, is harnessed for the state reconstruction of the spatial process. Thus, an integrated state estimation scheme with simultaneous sequential agent evacuation is proposed. Numerical results are included to highlight the proposed evacuation policy.
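The idea of minimizing accumulated exposure along an escape path can be illustrated with a shortest-path search on a discretized field, where the cost of entering a cell is its contamination level. This is a hypothetical sketch (Dijkstra on a grid), not the paper's evacuation policy, and the function and field layout are invented for illustration:

```python
import heapq

def min_exposure_path(field, start, exit_cell):
    """Find a 4-connected grid path from start to exit_cell that minimizes
    the accumulated contamination picked up along the way (illustrative)."""
    rows, cols = len(field), len(field[0])
    dist = {start: field[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == exit_cell:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + field[nr][nc]  # exposure accumulates per cell entered
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back to the start
    path, node = [exit_cell], exit_cell
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[exit_cell]
```

With a high-contamination wall between the agent and the exit, the returned path detours around it rather than crossing, trading distance for lower cumulative exposure.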
-
Deep inertial sequence learning has shown promising odometric resolution over model-based approaches for trajectory estimation in GPS-denied environments. However, existing neural inertial dead-reckoning frameworks are not suitable for real-time deployment on ultra-resource-constrained (URC) devices due to substantial memory, power, and compute bounds. Current deep inertial odometry techniques also suffer from gravity pollution, high-frequency inertial disturbances, varying sensor orientation, heading rate singularity, and failure in altitude estimation. In this paper, we introduce TinyOdom, a framework for training and deploying neural inertial models on URC hardware. TinyOdom exploits hardware- and quantization-aware Bayesian neural architecture search (NAS) and a temporal convolutional network (TCN) backbone to train lightweight models targeted towards URC devices. In addition, we propose a magnetometer-, physics-, and velocity-centric sequence learning formulation robust to preceding inertial perturbations. We also expand 2D sequence learning to 3D using a model-free barometric g-h filter robust to inertial and environmental variations. We evaluate TinyOdom for a wide spectrum of inertial odometry applications and target hardware against competing methods. Specifically, we consider four applications: pedestrian, animal, aerial, and underwater vehicle dead-reckoning. Across different applications, TinyOdom reduces the size of neural inertial models by 31× to 134× with 2.5 m to 12 m error in 60 seconds, enabling the direct deployment of models on URC devices while still maintaining or exceeding the localization resolution of the state-of-the-art. The proposed barometric filter tracks altitude within ±0.1 m and is robust to inertial disturbances and ambient dynamics. Finally, our ablation study shows that the introduced magnetometer-, physics-, and velocity-centric sequence learning formulation significantly improves localization performance even with notably lightweight models.
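The model-free barometric g-h filter mentioned above belongs to a classic family of fixed-gain trackers. A minimal sketch of the generic g-h (alpha-beta) form follows; the gains, time step, and function name are illustrative assumptions, not TinyOdom's tuned filter:

```python
def gh_filter(measurements, z0, v0, g=0.6, h=0.1, dt=0.05):
    """Generic fixed-gain g-h filter tracking altitude z and climb rate v.

    g weights the position correction, h the rate correction; the values
    here are illustrative, not TinyOdom's configuration.
    """
    z, v = z0, v0
    estimates = []
    for z_meas in measurements:
        # Predict forward one step with the current rate estimate
        z_pred = z + v * dt
        # Residual between the barometric measurement and the prediction
        r = z_meas - z_pred
        # Fixed-gain corrections to altitude and climb rate
        z = z_pred + g * r
        v = v + h * r / dt
        estimates.append(z)
    return estimates
```

Because the gains are fixed rather than covariance-derived, the filter needs no process or noise model, which is what makes this family attractive on ultra-resource-constrained hardware.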
-
This paper presents a Multiplicative Extended Kalman Filter (MEKF) framework using a state-of-the-art velocimeter Light Detection and Ranging (LIDAR) sensor for Terrain Relative Navigation (TRN) applications. The newly developed velocimeter LIDAR is capable of providing simultaneous position, Doppler velocity, and reflectivity measurements for every point in the point cloud. This information, along with pseudo-measurements from point cloud registration techniques, a novel bulk velocity batch state estimation process and inertial measurement data, is fused within a traditional Kalman filter architecture. Results from extensive emulation robotics experiments performed at Texas A&M’s Land, Air, and Space Robotics (LASR) laboratory and Monte Carlo simulations are presented to evaluate the efficacy of the proposed algorithms.
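The "multiplicative" in MEKF refers to how the attitude correction is applied: a small-angle error quaternion built from the filter's 3-vector attitude correction is composed with the reference quaternion, rather than added to it. A minimal sketch of that step, assuming an [x, y, z, w] quaternion convention and Hamilton products (details the paper may choose differently):

```python
import numpy as np

def mekf_attitude_update(q, delta_theta):
    """Apply a small-angle multiplicative correction to a unit quaternion.

    q           -- reference attitude quaternion, [x, y, z, w] convention (assumed)
    delta_theta -- 3-vector attitude correction from the Kalman update, in radians
    """
    # First-order error quaternion from the small-angle correction
    dq = np.concatenate([0.5 * delta_theta, [1.0]])
    dq /= np.linalg.norm(dq)
    # Hamilton product dq * q
    x1, y1, z1, w1 = dq
    x2, y2, z2, w2 = q
    out = np.array([
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
    ])
    return out / np.linalg.norm(out)  # renormalize to guard against drift
```

Composing and renormalizing keeps the estimate on the unit-quaternion manifold, which is exactly what an additive attitude update cannot guarantee.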