
Title: Launching a Micro-Scout UAV from a Mobile Robotic Manipulator Arm
This paper addresses the problem of autonomously deploying an unmanned aerial vehicle in non-trivial settings by leveraging a manipulator arm mounted on a ground robot, acting as a versatile mobile launch platform. Real-world deployment scenarios for micro aerial vehicles, such as search-and-rescue operations, often entail exploration and navigation of challenging environments including uneven terrain, cluttered spaces, and constrained openings and passageways; a recurring problem is therefore that of ensuring a safe take-off location, or of safely fitting through narrow openings while in flight. By launching from the manipulator end-effector, a 6-DoF controllable take-off pose within the arm workspace can be achieved, allowing the aerial vehicle to be properly positioned and oriented to initialize the autonomous flight portion of a mission. To accomplish this, we propose a sampling-based planner that respects a) the kinematic constraints of the ground robot / manipulator / aerial robot combination, b) the geometry of the environment as autonomously mapped by the ground robot's perception systems, and c) the expected dynamic motion of the aerial robot during take-off. The goal of the proposed planner is to ensure autonomous, collision-free initialization of an aerial robotic exploration mission, even within a cluttered, constrained environment. At the same time, the ground robot with the mounted manipulator can be used to position the take-off workspace in areas of interest, effectively acting as a carrier launch platform. We experimentally demonstrate this novel robotic capability through a sequence of experiments encompassing a micro aerial vehicle carried and launched from a 6-DoF manipulator arm mounted on a four-wheel robot base.
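The three planner constraints a)-c) can be illustrated with a small rejection-sampling sketch. Everything below is a hypothetical stand-in, not the paper's implementation: the arm workspace is modelled as a sphere, mapped obstacles as spheres, the take-off motion as a vertical ascent corridor, and poses as positions only rather than full 6-DoF poses.

```python
import math
import random

# Hypothetical parameters standing in for the real robot's geometry.
ARM_BASE = (0.0, 0.0, 0.5)   # arm base position on the ground robot (m)
ARM_REACH = 0.9              # max end-effector reach (m)
MAV_RADIUS = 0.15            # bounding sphere of the aerial robot (m)
ASCENT_HEIGHT = 1.0          # vertical corridor swept during take-off (m)

def reachable(pose):
    """Kinematic constraint (a): pose must lie inside the arm workspace."""
    return math.dist(pose, ARM_BASE) <= ARM_REACH

def collision_free(point, obstacles, margin):
    """Environment constraint (b): keep `margin` clearance from mapped
    obstacles, here modelled as spheres (center, radius)."""
    return all(math.dist(point, c) > r + margin for c, r in obstacles)

def corridor_clear(pose, obstacles, steps=10):
    """Dynamic-motion constraint (c): the ascent corridor above the
    take-off pose must also be collision-free."""
    x, y, z = pose
    return all(
        collision_free((x, y, z + ASCENT_HEIGHT * k / steps), obstacles, MAV_RADIUS)
        for k in range(steps + 1)
    )

def sample_takeoff_pose(obstacles, n_samples=2000, rng=random.Random(0)):
    """Rejection-sample a take-off position satisfying constraints (a)-(c)."""
    for _ in range(n_samples):
        pose = tuple(ARM_BASE[i] + rng.uniform(-ARM_REACH, ARM_REACH)
                     for i in range(3))
        if (reachable(pose)
                and collision_free(pose, obstacles, MAV_RADIUS)
                and corridor_clear(pose, obstacles)):
            return pose
    return None  # no valid take-off pose found within the sample budget
```

A real planner would additionally sample orientation, run the manipulator's inverse kinematics for reachability, and query the onboard map for collision checks; the rejection structure stays the same.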
Journal Name: 2021 IEEE Aerospace Conference
Sponsoring Org: National Science Foundation
More Like this
  1. This paper presents a novel strategy for the autonomous deployment of Micro Aerial Vehicle (MAV) scouts through constricted, aperture-like ingress points, by narrowly fitting and launching them with a high-precision mobile manipulation robot. A significant problem during exploration and reconnaissance of highly unstructured environments, such as collapsed indoor ones, is encountering areas rendered impassable by their constricted and rigid nature. We propose that a heterogeneous robotic system-of-systems equipped with manipulation capabilities, while also ferrying a fleet of micro-sized aerial agents, can deploy the latter through constricted apertures that only marginally fit them in size, thus allowing them to act as scouts and resume the reconnaissance mission. This work's contribution is twofold: first, it proposes active-vision-based aperture detection to locate candidate ingress points, together with a hierarchical search-based aperture-profile analysis to position a MAV's body through them; second, it presents and experimentally demonstrates the novelty of a system-of-systems approach that leverages mobile manipulation to deploy other robots which are otherwise incapable of entering through extremely narrow openings.
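The aperture-profile analysis in the first result can be caricatured as a coarse-to-fine search over a single roll angle: does the MAV's cross-section, rotated, fit inside the detected opening with some clearance? The rectangle model, step sizes, and all dimensions below are illustrative assumptions, not the paper's hierarchical method.

```python
import math

def rotated_extent(width, height, angle):
    """Axis-aligned bounding extents of a width x height rectangle
    (the MAV's frontal profile) rolled by `angle` radians."""
    c, s = abs(math.cos(angle)), abs(math.sin(angle))
    return (width * c + height * s, width * s + height * c)

def fits(mav_w, mav_h, ap_w, ap_h, angle, clearance):
    """True if the rolled profile passes a rectangular aperture with margin."""
    w, h = rotated_extent(mav_w, mav_h, angle)
    return w + 2 * clearance <= ap_w and h + 2 * clearance <= ap_h

def find_ingress_roll(mav_w, mav_h, ap_w, ap_h, clearance=0.01):
    """Coarse-to-fine search for a roll angle at which the MAV profile
    fits through the aperture; returns the angle in radians, or None."""
    # Coarse pass: 5-degree steps over [0, 90] degrees.
    for deg in range(0, 91, 5):
        if fits(mav_w, mav_h, ap_w, ap_h, math.radians(deg), clearance):
            # Fine pass: refine to 0.5-degree resolution around the hit.
            for half_deg in range(max(0, deg - 5) * 2, (deg + 5) * 2 + 1):
                a = math.radians(half_deg / 2)
                if fits(mav_w, mav_h, ap_w, ap_h, a, clearance):
                    return a
    return None
```

For example, a 0.4 m wide, 0.1 m tall profile cannot pass a 0.15 m wide slit upright, but the search finds a near-90-degree roll at which it does.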
  2. This work considers autonomous fruit picking using an aerial grasping robot by tightly integrating vision-based perception and control within a learning framework. The architecture employs a convolutional neural network (CNN) to encode images and vehicle state information. This encoding is passed into a sub-task classifier and associated reference waypoint generator. The classifier is trained to predict the current phase of the task being executed: Staging, Picking, or Reset. Based on the predicted phase, the waypoint generator predicts a set of obstacle-free 6-DOF waypoints, which serve as a reference trajectory for model-predictive control (MPC). By iteratively generating and following these trajectories, the aerial manipulator safely approaches a mock-up goal fruit and removes it from the tree. The proposed approach is validated in 29 flight tests, through a comparison to a conventional baseline approach, and an ablation study on its key features. Overall, the approach achieved comparable success rates to the conventional approach, while reaching the goal faster.
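The classify-then-generate-then-track pipeline of the second result can be sketched with stand-ins: a distance-based rule in place of the CNN sub-task classifier, fixed per-phase reference waypoints in place of the learned generator, and a proportional step in place of model-predictive control. All names, gains, and thresholds below are hypothetical.

```python
import math

def classify_phase(pos, fruit, holding):
    """Stand-in for the CNN sub-task classifier."""
    if holding:
        return "Reset"
    return "Picking" if math.dist(pos, fruit) < 0.6 else "Staging"

def reference_waypoint(phase, fruit, standoff, home):
    """Phase-conditioned reference (position only, for brevity)."""
    return {"Staging": standoff, "Picking": fruit, "Reset": home}[phase]

def track(pos, waypoint, gain=0.3):
    """One control step toward the reference: a proportional stand-in
    for the paper's model-predictive controller."""
    return tuple(p + gain * (w - p) for p, w in zip(pos, waypoint))

def run_episode(home, standoff, fruit, grasp_radius=0.05, max_steps=200):
    """Iterate classify -> generate -> track until the (mock) fruit is
    picked and the vehicle has returned home; returns the phase log."""
    pos, holding, phases = home, False, []
    for _ in range(max_steps):
        phase = classify_phase(pos, fruit, holding)
        phases.append(phase)
        pos = track(pos, reference_waypoint(phase, fruit, standoff, home))
        if not holding and math.dist(pos, fruit) < grasp_radius:
            holding = True  # fruit detached; classifier switches to Reset
        if holding and math.dist(pos, home) < grasp_radius:
            return phases
    return phases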
  3. The deep chlorophyll maximum (DCM) layer is an ecologically important feature of the open ocean. The DCM cannot be observed using aerial or satellite remote sensing; thus, in situ observations are essential. Further, understanding the responses of microbes to the environmental processes driving their metabolism and interactions requires observing in a reference frame that moves with a plankton population drifting in ocean currents, i.e., Lagrangian. Here, we report the development and application of a system of coordinated robots for studying planktonic biological communities drifting within the ocean. The presented Lagrangian system uses three coordinated autonomous robotic platforms. The focal platform consists of an autonomous underwater vehicle (AUV) fitted with a robotic water sampler. This platform localizes and drifts within a DCM community, periodically acquiring samples while continuously monitoring the local environment. The second platform is an AUV equipped with environmental sensing and acoustic tracking capabilities. This platform characterizes environmental conditions by tracking the focal platform and vertically profiling in its vicinity. The third platform is an autonomous surface vehicle equipped with satellite communications and subsea acoustic tracking capabilities. While also acoustically tracking the focal platform, this vehicle serves as a communication relay that connects the subsea robot to human operators, thereby providing situational awareness and enabling intervention if needed. Deployed in the North Pacific Ocean within the core of a cyclonic eddy, this coordinated system autonomously captured fundamental characteristics of the in situ DCM microbial community in a manner not possible previously.

  4. We autonomously directed a small quadcopter package delivery Uncrewed Aerial Vehicle (UAV) or "drone" to take off, fly a specified route, and land for a total of 209 flights while varying a set of operational parameters. The vehicle was equipped with onboard sensors, including GPS, IMU, voltage and current sensors, and an ultrasonic anemometer, to collect high-resolution data on the inertial states, wind speed, and power consumption. Operational parameters, such as commanded ground speed, payload, and cruise altitude, were varied for each flight. This large data set has a total flight time of 10 hours and 45 minutes and was collected from April to October of 2019, covering a total distance of approximately 65 kilometers. The data collected were validated by comparing flights with similar operational parameters. We believe these data will be of great interest to the research and industrial communities, who can use the data to improve UAV designs, safety, and energy efficiency, as well as advance the physical understanding of in-flight operations for package delivery drones.

  5. Ishigami G., Yoshida K. (Ed.)
    This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments. Robotic tele-operation in remote environments is difficult due to the lack of sufficient situational awareness, mostly caused by the stationary, limited field of view and lack of depth perception of the robot's onboard camera. The emerging state of the practice is to use two robots: a primary, and a secondary that acts as a visual assistant to overcome the perceptual limitations of the onboard sensors by providing an external viewpoint. However, problems exist when using a tele-operated visual assistant: extra manpower, manually chosen suboptimal viewpoints, and extra teamwork demand between primary and secondary operators. In this work, we use an autonomous tethered aerial visual assistant to replace the secondary robot and operator, reducing the human-robot ratio from 2:2 to 1:2. This visual assistant is able to autonomously navigate through unstructured or confined spaces in a risk-aware manner, while continuously maintaining good viewpoint quality to increase the primary operator's situational awareness. With the proposed co-robots team, tele-operation missions in nuclear operations, bomb squads, disaster robotics, and other domains with novel tasks or highly occluded environments could benefit from reduced manpower and teamwork demand, along with improved visual-assistance quality based on trustworthy risk-aware motion in cluttered environments.
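The trade-off at the heart of the fifth result, maintaining viewpoint quality while moving in a risk-aware manner, can be sketched as scoring candidate viewpoints by quality minus weighted traversal risk. The quality model (preferred standoff distance and elevation) and the clearance-based risk model below are illustrative assumptions, not the paper's formulation.

```python
import math

def viewpoint_quality(vp, target, ideal_dist=1.5, ideal_elev=0.6):
    """Higher when the camera sits near a preferred standoff distance and
    elevation angle relative to the primary robot (hypothetical model)."""
    d = math.dist(vp, target)
    horiz = math.hypot(vp[0] - target[0], vp[1] - target[1])
    elev = math.atan2(vp[2] - target[2], max(1e-9, horiz))
    return -abs(d - ideal_dist) - abs(elev - ideal_elev)

def traversal_risk(vp, obstacles):
    """Risk grows as clearance to the nearest obstacle shrinks; obstacles
    are modelled as spheres (center, radius)."""
    clearance = min(math.dist(vp, c) - r for c, r in obstacles)
    return math.inf if clearance <= 0 else 1.0 / clearance

def best_viewpoint(candidates, target, obstacles, risk_weight=0.2):
    """Risk-aware selection: maximize quality minus weighted risk."""
    return max(candidates,
               key=lambda vp: viewpoint_quality(vp, target)
                              - risk_weight * traversal_risk(vp, obstacles))
```

Candidates inside an obstacle score negative infinity and are never chosen, so safety dominates; among safe viewpoints, the weight trades image quality against clearance.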