

Title: Continuous-Time Spline Visual-Inertial Odometry
We propose a continuous-time spline-based formulation for visual-inertial odometry (VIO). Specifically, we model the poses as a cubic spline, whose temporal derivatives are used to synthesize linear acceleration and angular velocity, which are compared to the measurements from the inertial measurement unit (IMU) for optimal state estimation. The spline boundary conditions create constraints between the camera and the IMU, with which we formulate VIO as a constrained nonlinear optimization problem. Continuous-time pose representation makes it possible to address many VIO challenges, e.g., rolling shutter distortion and sensors that may lack synchronization. We conduct experiments on two publicly available datasets that demonstrate the state-of-the-art accuracy and real-time computational efficiency of our method.
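As a minimal 1-D illustration of the idea (not the paper's implementation), the sketch below fits a cubic spline to position samples and compares its second time derivative against ideal accelerometer readings; residuals of this kind are what would enter the estimator's nonlinear cost. All signals, times, and names here are hypothetical stand-ins.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical 1-D example: true position is sinusoidal, so the true
# acceleration is its analytic second derivative.
t_knots = np.linspace(0.0, 2.0, 21)      # spline knot times (s)
pos = np.sin(t_knots)                    # "camera" position samples
spline = CubicSpline(t_knots, pos)       # continuous-time position model

t_imu = np.linspace(0.1, 1.9, 50)        # IMU sample times
accel_synth = spline(t_imu, 2)           # 2nd derivative of the spline
accel_meas = -np.sin(t_imu)              # ideal accelerometer readings

# Residuals like these couple the spline states to the IMU measurements.
residual = accel_synth - accel_meas
print(np.max(np.abs(residual)))
```

Because the spline is twice differentiable, IMU measurements at arbitrary timestamps can be compared against the model without any synchronization with camera frames, which is the property the abstract exploits.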
Award ID(s):
1637875
NSF-PAR ID:
10387638
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2022 International Conference on Robotics and Automation (ICRA)
Page Range / eLocation ID:
9492-9498
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Inertial measurement units (IMUs) are an alternative to traditional optical motion capture systems that allow for data collection outside the lab and continuous monitoring for daily activities. In this study, a non-linear least squares optimization was used to convert IMU measurements into joint kinematics. The optimization calculates joint angles simultaneously over all time frames by optimizing B-spline nodes, without integrating any IMU measurements. This approach enables an accurate solution that works well with noisy experimental IMU data since integration errors are eliminated. 
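A toy 1-D version of this idea can be sketched as follows: instead of integrating noisy rate measurements, B-spline coefficients are optimized by nonlinear least squares so that the spline's derivative matches the measurements over all time frames simultaneously. The signal, knot layout, and noise level below are assumed stand-ins, not the study's actual formulation.

```python
import numpy as np
from scipy.interpolate import BSpline
from scipy.optimize import least_squares

# Hypothetical joint-angle example: recover angle(t) from noisy
# "gyroscope" rate measurements without ever integrating them.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 200)
true_angle = np.sin(2 * t)
rate_meas = 2 * np.cos(2 * t) + 0.05 * rng.standard_normal(t.size)

k = 3                                         # cubic B-spline
knots = np.concatenate(([0.0] * k, np.linspace(0.0, 2.0, 15), [2.0] * k))
n_coef = len(knots) - k - 1

def residuals(c):
    spl = BSpline(knots, c, k)
    # Fit the spline's derivative to all rate measurements at once,
    # plus one anchor residual pinning the initial angle.
    return np.concatenate((spl.derivative()(t) - rate_meas,
                           [spl(0.0) - true_angle[0]]))

sol = least_squares(residuals, np.zeros(n_coef))
est_angle = BSpline(knots, sol.x, k)(t)
print(np.max(np.abs(est_angle - true_angle)))
```

Because no measurement is integrated, zero-mean sensor noise does not accumulate into drift; the smoothness of the B-spline basis absorbs it instead.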
  2. Vision-based state estimation is challenging in underwater environments due to color attenuation, low visibility, and floating particulates. All visual-inertial estimators are prone to failure due to degradation in image quality. However, underwater robots are required to keep track of their pose during field deployments. We propose a robust estimator that fuses the robot's dynamic and kinematic model with proprioceptive sensors to propagate the pose whenever visual-inertial odometry (VIO) fails. To detect VIO failures, health tracking is used, which enables switching between pose estimates from VIO and a kinematic estimator. Loop closure is implemented on a weighted pose graph for global trajectory optimization. Experimental results from Aqua2 Autonomous Underwater Vehicle field deployments demonstrate the robustness of our approach across different underwater environments, such as shipwrecks and coral reefs. The proposed hybrid approach is robust to VIO failures, producing consistent trajectories even in harsh conditions.
  3. IEEE (Ed.)
    This paper addresses the robustness problem of visual-inertial state estimation for underwater operations. Underwater robots operating in a challenging environment are required to know their pose at all times. All vision-based localization schemes are prone to failure due to poor visibility conditions, color loss, and lack of features. The proposed approach utilizes a model of the robot's kinematics together with proprioceptive sensors to maintain the pose estimate during visual-inertial odometry (VIO) failures. Furthermore, the trajectories from successful VIO and the ones from the model-driven odometry are integrated in a coherent set that maintains a consistent pose at all times. Health monitoring tracks the VIO process, ensuring timely switches between the two estimators. Finally, loop closure is implemented on the overall trajectory. The resulting framework is a robust estimator switching between model-based and visual-inertial odometry (SM/VIO). Experimental results from numerous deployments of the Aqua2 vehicle demonstrate the robustness of our approach over coral reefs and a shipwreck.
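A skeletal version of the health-monitored switch described above might look like the following. The feature-count signal and the threshold are illustrative assumptions, not SM/VIO's actual health metric.

```python
# Hypothetical switch between two pose estimators, keyed on the number
# of tracked visual features as a simple stand-in for VIO health.
MIN_FEATURES = 15  # assumed threshold, not from the paper

def fuse_pose(vio_pose, model_pose, num_tracked_features):
    """Return (pose, source): VIO when healthy, model odometry otherwise."""
    if num_tracked_features >= MIN_FEATURES:
        return vio_pose, "vio"
    return model_pose, "model"

print(fuse_pose((1.0, 2.0), (1.1, 2.1), 40))  # healthy: VIO pose wins
print(fuse_pose((1.0, 2.0), (1.1, 2.1), 3))   # degraded: model pose wins
```

A real system would smooth the health signal over a window and blend the two trajectory segments at switch time, as the paper's coherent-trajectory integration implies.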
  4. Legged robots require knowledge of pose and velocity in order to maintain stability and execute walking paths. Current solutions either rely on vision data, which is susceptible to environmental and lighting conditions, or fusion of kinematic and contact data with measurements from an inertial measurement unit (IMU). In this work, we develop a contact-aided invariant extended Kalman filter (InEKF) using the theory of Lie groups and invariant observer design. This filter combines contact-inertial dynamics with forward kinematic corrections to estimate pose and velocity along with all current contact points. We show that the error dynamics follows a log-linear autonomous differential equation with several important consequences: (a) the observable state variables can be rendered convergent with a domain of attraction that is independent of the system's trajectory; (b) unlike the standard EKF, neither the linearized error dynamics nor the linearized observation model depend on the current state estimate, which (c) leads to improved convergence properties and (d) a local observability matrix that is consistent with the underlying nonlinear system. Furthermore, we demonstrate how to include IMU biases, add/remove contacts, and formulate both world-centric and robo-centric versions. We compare the convergence of the proposed InEKF with the commonly used quaternion-based extended Kalman filter (EKF) through both simulations and experiments on a Cassie-series bipedal robot. Filter accuracy is analyzed using motion capture, while a LiDAR mapping experiment provides a practical use case. Overall, the developed contact-aided InEKF provides better performance in comparison with the quaternion-based EKF as a result of exploiting symmetries present in the system.
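The state-independence of the invariant error can be seen in a small numeric sketch on SO(3): when the true and estimated attitudes are propagated with the same body-frame gyro increments, the right-invariant error R̂Rᵀ is unchanged, regardless of the trajectory. This rotation-only toy (no bias, noise, or contact model) illustrates the symmetry the filter exploits, not the paper's full filter.

```python
import numpy as np

def hat(w):
    # so(3) hat map: 3-vector -> skew-symmetric matrix
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(w):
    # Rodrigues formula for the SO(3) matrix exponential
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    K = hat(w / th)
    return np.eye(3) + np.sin(th) * K + (1.0 - np.cos(th)) * K @ K

rng = np.random.default_rng(1)
R_true = exp_so3(rng.standard_normal(3))     # arbitrary true attitude
R_est = R_true @ exp_so3(0.1 * np.ones(3))   # estimate with initial error

# Propagate both with identical body-frame gyro increments: the
# right-invariant error eta = R_est @ R_true.T stays constant, i.e.
# the error evolution does not depend on the state estimate.
eta0 = R_est @ R_true.T
for _ in range(100):
    w_dt = 0.01 * rng.standard_normal(3)     # shared gyro increment
    R_true = R_true @ exp_so3(w_dt)
    R_est = R_est @ exp_so3(w_dt)
print(np.max(np.abs(R_est @ R_true.T - eta0)))
```

In a quaternion-based EKF, by contrast, the linearized error dynamics would pick up the (wrong) current estimate, which is the root of the convergence gap the abstract reports.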
  5. Unmanned Aerial Vehicles (UAVs) find increasing use in mission-critical tasks in both civilian and military operations. Most UAVs rely on Inertial Measurement Units (IMUs) to calculate vehicle attitude and track vehicle position. Therefore, an incorrect IMU reading can cause a vehicle to destabilize and possibly even crash. In this paper, we describe how a strategic adversary might introduce spurious IMU values that steer a vehicle off its mission-specified path while evading customary anomaly detection mechanisms, thereby perpetrating a “stealthy attack” on the system. We explore the feasibility of a Deep Neural Network (DNN) that uses a vehicle's state information to calculate the IMU values needed to perpetrate such an attack. The eventual goal is to perturb the vehicle far enough from its mission parameters to compromise mission reliability while, from the operator's perspective, the vehicle still appears to be operating normally.