

Title: Image-based estimation, planning, and control for high-speed flying through multiple openings

This article focuses on enabling an aerial robot to fly through multiple openings at high speed using image-based estimation, planning, and control. State-of-the-art approaches assume that the robot’s global translational variables (e.g., position and velocity) can either be measured directly with external localization sensors or estimated onboard. Unfortunately, estimating the translational variables may be impractical because modeling errors and sensor noise can lead to poor performance. Furthermore, monocular-camera-based pose estimation techniques typically require a model of the gap (window) to resolve the unknown scale. Herein, a new scheme for image-based estimation, aggressive-maneuvering trajectory generation, and motion control is developed for multi-rotor aerial robots. The approach neither relies on measurements of the translational variables nor requires a model of the gap. First, the robot dynamics are expressed in terms of image features that are invariant to rotation (invariant features). This step decouples the robot’s attitude and keeps the invariant features in the flat output space of the differentially flat system. Second, an optimal, dynamically feasible trajectory for the invariant features is generated efficiently in real time. Finally, a controller is designed to enable real-time, image-based tracking of the trajectory. The performance of the estimation, planning, and control scheme is validated in simulations and through 80 successful experimental trials. The results show the ability to fly through two narrow openings, with the estimation, planning, and motion control from one opening to the next performed in real time onboard the robot.
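As a rough illustration of the rotation-invariant feature idea described above, the sketch below de-rotates pixel features onto a virtual, gravity-aligned camera using the onboard attitude estimate. It is a minimal sketch, assuming a calibrated pinhole camera whose frame coincides with the body frame; the function name and interfaces are illustrative, not from the paper.

    import numpy as np

    def derotate_features(pixels, K, R_wb):
        """Map pixel features onto a virtual, gravity-aligned camera.

        pixels : (N, 2) pixel coordinates of the opening's corners
        K      : (3, 3) camera intrinsic matrix
        R_wb   : (3, 3) body-to-world rotation from the attitude filter
                 (camera frame assumed to coincide with the body frame)

        Returns (N, 2) normalized features that remain unchanged under
        pure rotation of the robot, i.e., rotation-invariant features.
        """
        pts_h = np.hstack([pixels, np.ones((len(pixels), 1))])  # homogeneous
        rays_cam = np.linalg.inv(K) @ pts_h.T   # bearing rays, camera frame
        rays_world = R_wb @ rays_cam            # de-rotate with the attitude
        return (rays_world[:2] / rays_world[2]).T  # re-project, level camera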

 
PAR ID: 10547155
Author(s) / Creator(s): ;
Publisher / Repository: SAGE Publications
Journal Name: The International Journal of Robotics Research
Volume: 39
Issue: 9
ISSN: 0278-3649
Format(s): Medium: X; Size: pp. 1122-1137
Sponsoring Org: National Science Foundation
More Like this
  1. A discrete-time, optimal trajectory planning scheme for generating the position trajectory of a vehicle is presented, with the mission duration treated as a free variable. The vehicle is actuated in three rotational degrees of freedom and one translational degree of freedom. This model applies to vehicles that use a body-fixed thrust vector direction for translational motion control, including fixed-wing and rotorcraft unmanned aerial vehicles (UAVs), unmanned underwater vehicles (UUVs), and spacecraft. The lightweight scheme proposed here generates the trajectory in inertial coordinates and is intended for real-time, on-the-go applications. The unspecified terminal time is treated as an additional design parameter by deriving the optimality conditions in a discrete-time setting, which yields the discrete transversality condition. The trajectory starts from an initial position and reaches a desired final position in an unspecified final time that optimizes the cost on state and control. The trajectory generated by this scheme can serve as the desired trajectory for a tracking control scheme. Numerical simulation results validate the performance of this trajectory generation scheme used in conjunction with a nonlinear tracking control scheme.
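    As a sketch of the free-terminal-time machinery referenced above (standard discrete-time optimal control; the paper's exact formulation may differ), consider

    \[
    J = \Phi(x_N) + \sum_{k=0}^{N-1} L(x_k, u_k), \qquad x_{k+1} = f(x_k, u_k),
    \]

    with discrete Hamiltonian $H_k = L(x_k, u_k) + \lambda_{k+1}^{\top} f(x_k, u_k)$. The first-order conditions give the costate recursion $\lambda_k = \partial H_k / \partial x_k$ with $\lambda_N = \partial \Phi / \partial x_N$, control stationarity $\partial H_k / \partial u_k = 0$, and, when the number of steps $N$ is free, an additional stationarity of $J$ with respect to $N$; one common form is $H_{N-1} = 0$ at the optimum, the discrete analogue of the continuous-time free-final-time condition $H(t_f) = 0$.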
  2. Performing robust goal-directed manipulation tasks remains a crucial challenge for autonomous robots. In an ideal case, shared autonomous control of manipulators would allow human users to specify their intent as a goal state and have the robot reason over the actions and motions needed to achieve it. However, realizing this goal remains elusive due to the problem of perceiving the robot’s environment. We address the problem of axiomatic scene estimation (AxScEs) for robot manipulation in cluttered scenes, which is the estimation of a tree-structured scene graph describing the configuration of objects observed from robot sensing. We propose generative approaches to inferring the robot’s environment as a scene graph: the axiomatic particle filter and an axiomatic scene estimation sampler based on Markov chain Monte Carlo. The results from AxScEs estimation are axioms amenable to goal-directed manipulation through symbolic inference for task planning and collision-free motion planning and execution. We demonstrate the results for goal-directed manipulation of multi-object scenes by a PR2 robot.
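    A generic bootstrap particle filter over scene-graph hypotheses, in the spirit of the axiomatic particle filter above; the perturbation and likelihood functions are placeholders standing in for the paper's proposal and observation models, not its actual interfaces.

        import numpy as np

        def particle_filter_step(particles, weights, observation, perturb, likelihood):
            """One resample-propose-reweight iteration over scene hypotheses.

            particles  : list of scene-graph hypotheses
            weights    : (N,) importance weights summing to 1
            perturb    : proposal that locally modifies a hypothesis
                         (e.g., re-parents a node or jitters an object pose)
            likelihood : p(observation | hypothesis), a placeholder score
            """
            n = len(particles)
            idx = np.random.choice(n, size=n, p=weights)      # resample
            proposed = [perturb(particles[i]) for i in idx]   # propose
            w = np.array([likelihood(observation, h) for h in proposed])
            w = w / w.sum()                                   # reweight
            return proposed, w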
  3.
    In this paper, we investigate the operation of an aerial manipulator system, namely an Unmanned Aerial Vehicle (UAV) equipped with a controllable two-degree-of-freedom arm, to carry out actuation tasks on the fly. Our solution employs a Q-learning method to control the trajectory of the tip of the arm, also called the end-effector. More specifically, we develop a motion planning model based on Time To Collision (TTC), which enables a quadrotor UAV to navigate around obstacles while ensuring the manipulator’s reachability. Additionally, we utilize a model-based Q-learning model to independently track and control the desired trajectory of the manipulator’s end-effector, given an arbitrary baseline trajectory for the UAV platform. Such a combination enables a variety of actuation tasks, such as high-altitude welding, structural monitoring and repair, battery replacement, gutter cleaning, skyscraper cleaning, and power line maintenance, in hard-to-reach and risky environments while retaining compatibility with flight control firmware. Our RL-based control mechanism yields a robust control strategy that can handle uncertainties in the motion of the UAV, offering promising performance. Specifically, our method achieves 92% accuracy in terms of average displacement error (i.e., the mean distance between the target and obtained trajectory points) using Q-learning with 15,000 episodes.
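    The tabular Q-learning update at the core of such a tracking controller is standard; the state/action encoding below (tracking-error bins and discrete joint-rate commands) is an assumption for illustration, not the paper's design.

        import numpy as np

        def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
            """Standard tabular Q-learning update.

            Q      : (num_states, num_actions) value table
            s, a   : indices of the current state (e.g., binned end-effector
                     tracking error) and the action taken (e.g., a discrete
                     joint-rate command); both are assumed encodings
            r      : reward, e.g., negative distance to the desired trajectory
            s_next : index of the resulting state
            """
            td_target = r + gamma * np.max(Q[s_next])
            Q[s, a] += alpha * (td_target - Q[s, a])
            return Q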
  4. Pai Zheng (Ed.)
    Abstract

    A significant challenge in human–robot collaboration (HRC) is coordinating robot and human motions. Discoordination can lead to production delays and human discomfort. Prior works seek coordination by planning robot paths that treat humans or their anticipated occupancy as static obstacles, making those plans nearsighted and prone to entrapment by human motion. This work presents the spatio-temporal avoidance of predictions-prediction and planning framework (STAP-PPF) to improve robot–human coordination in HRC. STAP-PPF predicts multi-step human motion sequences based on the locations of objects the human manipulates. It then proactively determines time-optimal robot paths that account for the predicted human motion and for the robot speed restrictions anticipated under the ISO/TS 15066 speed and separation monitoring (SSM) mode. When executing robot paths, STAP-PPF continuously updates its human motion predictions and, in real time, warps the robot’s path to account for the updated predictions and SSM effects, mitigating delays and human discomfort. Results show that STAP-PPF generates robot trajectories of shorter duration, adapts better to real-time human motion deviations, and maintains greater robot/human separation throughout tasks requiring close human–robot interaction. Tests with an assembly sequence demonstrate STAP-PPF’s ability to predict multi-step human tasks and plan robot motions for the sequence. STAP-PPF also estimates robot trajectory durations most accurately, within 30% of actual, which can be used to adapt the robot sequencing to minimize disruption.
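    As a sketch of the SSM speed restriction mentioned above: under speed and separation monitoring, the robot's speed must be low enough that it can stop before contact given the current human-robot separation. The simplified bound below omits several terms of the standard's full protective-distance formula (e.g., sensor uncertainty zones and intrusion distance), so the formula and its constants are illustrative only.

        import math

        def ssm_speed_limit(d, v_h, t_r, a_max, C):
            """Largest robot speed v_r satisfying a simplified SSM bound:
            d >= v_h*(t_r + v_r/a_max) + v_r*t_r + v_r**2/(2*a_max) + C,
            where d is the current separation, v_h the human approach speed,
            t_r the robot reaction time, a_max its braking deceleration, and
            C a clearance margin. Solves the quadratic for the largest v_r.
            """
            a_q = 1.0 / (2.0 * a_max)
            b_q = t_r + v_h / a_max
            c_q = v_h * t_r + C - d
            disc = b_q * b_q - 4.0 * a_q * c_q
            if disc <= 0.0:
                return 0.0  # separation too small: the robot must stop
            return max(0.0, (-b_q + math.sqrt(disc)) / (2.0 * a_q))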

     