Title: Autonomous Visual Assistance for Robot Operations Using a Tethered UAV
This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments. Robotic tele-operation in remote environments is difficult due to insufficient situational awareness, caused mostly by the stationary, limited field of view and the lack of depth perception of the robot's onboard camera. The emerging state of the practice is to use two robots: a primary, and a secondary that acts as a visual assistant, overcoming the perceptual limitations of the onboard sensors by providing an external viewpoint. However, a tele-operated visual assistant brings its own problems: extra manpower, manually chosen and often suboptimal viewpoints, and extra teamwork demand between the primary and secondary operators. In this work, we use an autonomous tethered aerial visual assistant to replace the secondary robot and its operator, reducing the human-robot ratio from 2:2 to 1:2. This visual assistant can autonomously navigate through unstructured or confined spaces in a risk-aware manner while continuously maintaining good viewpoint quality to increase the primary operator's situational awareness. With the proposed co-robot team, tele-operation missions in nuclear operations, bomb squads, disaster robotics, and other domains with novel tasks or highly occluded environments could benefit from reduced manpower and teamwork demand, along with improved visual assistance quality based on trustworthy risk-aware motion in cluttered environments.
Award ID(s): 1945105
PAR ID: 10313426
Author(s) / Creator(s):
Editor(s): Ishigami G., Yoshida K.
Date Published:
Journal Name: Field and Service Robotics. Springer Proceedings in Advanced Robotics
Volume: 16
Format(s): Medium: X
Sponsoring Org: National Science Foundation
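The abstract describes navigation that must trade motion risk against viewpoint quality in cluttered spaces. Purely as an illustration of that trade-off, here is a minimal Python sketch that scores candidate viewpoints with a linear combination of an assumed quality term and an assumed risk term; the names (Viewpoint, select_viewpoint, risk_weight) and the scoring rule are hypothetical stand-ins, not details from the paper.

```python
# Hypothetical illustration of risk-aware viewpoint selection.
# None of these names, values, or formulas come from the paper.
from dataclasses import dataclass


@dataclass
class Viewpoint:
    position: tuple[float, float, float]  # candidate camera pose (x, y, z)
    risk: float      # assumed motion risk in [0, 1], e.g. from obstacle proximity
    quality: float   # assumed viewpoint quality in [0, 1], e.g. manipulator visibility


def select_viewpoint(candidates: list[Viewpoint], risk_weight: float = 0.5) -> Viewpoint:
    """Pick the candidate that best trades viewpoint quality against motion risk.

    A real risk-aware planner would score whole tether-feasible paths;
    this scores single poses only, as a sketch.
    """
    return max(candidates, key=lambda v: v.quality - risk_weight * v.risk)


if __name__ == "__main__":
    candidates = [
        Viewpoint((1.0, 0.0, 1.5), risk=0.1, quality=0.6),
        Viewpoint((0.5, 0.5, 2.0), risk=0.8, quality=0.9),  # better view, riskier
    ]
    # At this weight the safer viewpoint wins despite its lower quality.
    print("chosen viewpoint:", select_viewpoint(candidates).position)
```

Raising risk_weight pushes the choice toward safer but less informative viewpoints, which is the same tension a risk-aware visual assistant must resolve over full trajectories.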
More Like this
  1. Rapid advancements in Artificial Intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control, commonly referred to as Human-on-the-Loop (HotL) systems. Transparency of HotL systems requires clear explanations of autonomous behavior, so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous and mobile, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overload them with information and negatively affects their understanding of the situation. Explaining the autonomous behavior of multiple robots therefore creates a design tension that demands careful investigation. This paper examines the User Interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS), or drones. We analyze the impact of UI design choices on human situational awareness, and we conducted multiple user studies with both inexperienced and expert sUAS operators to arrive at our design solution and initial guidelines for designing the HotL multi-sUAS interface.
  2. In this work, we address the flexible physical docking-and-release and recharging needs of a marsupial system comprising an autonomous tiltrotor hybrid Micro Aerial Vehicle and a high-end legged locomotion robot. In persistent monitoring and emergency response situations, such aerial/ground robot teams can offer rapid situational awareness by taking off from the mobile ground robot and scouting a wide area from the sky. For this operational profile to retain its long-term effectiveness, regrouping via landing and docking of the aerial robot onboard the ground one is a key requirement, and onboard recharging is necessary for systematic missions. We present a framework comprising a novel landing mechanism with recharging capabilities embedded in its design, an external battery-based recharging extension for our previously developed power-harvesting Micro Aerial Vehicle module, and a strategy for reliable landing and docking-and-release between the two robots. We specifically address the need for this system to be ferried by a quadruped ground robot while remaining reliable during aggressive legged locomotion over harsh terrain. We present conclusive experimental validation studies by deploying our solution on a marsupial system comprising the MiniHawk micro tiltrotor and the Boston Dynamics Spot legged robot.
  3. Police officers often must work alone in clearing operations, a procedure that involves surveying a building for threats and responding appropriately. A partnership between drone swarms and officers has the potential to increase the safety of officers and civilians during these high-stress operations and to reduce the risk of harm from hostile persons. This two-part study examines trust, situational awareness, mental demand, performance, and human-robot interaction during law enforcement building-clearing operations using either a single drone or a drone swarm. Results indicate that single-drone use can lengthen the operation but enhances the accuracy and safety of clearing: it increased situational awareness, decreased the number of targets missed, and yielded a moderate level of trust. For drone swarms, results indicate significant differences in mental workload between swarm data feeds and single-drone feeds, but no substantial difference in the accuracy of finding targets.
  4. In this work, we address the System-of-Systems reassembly operation of a marsupial team comprising a hybrid Unmanned Aerial Vehicle and a legged locomotion robot, relying solely on vision-based systems assisted by Deep Learning. The target application domain is large-scale field surveying under wireless communication disruptions. While most real-world field deployments of multi-robot systems assume some degree of wireless communication to coordinate key tasks such as multi-agent rendezvous, a desirable safeguard against unrecoverable communication failures or radio degradation due to jamming cyber-attacks is the ability of autonomous systems to robustly execute their mission with onboard perception alone. This is especially true for marsupial aerial/ground teams, in which landing onboard the ground robot is required. We propose a pipeline built around Deep Neural Network-based Vehicle-to-Vehicle detection on aerial views acquired at altitudes typical of real-world Micro Aerial Vehicle surveying operations, such as near the upper edge of the 400 ft Above Ground Level window. We present the minimal computing and sensing suite that supports its execution onboard a fully autonomous micro tiltrotor aircraft that detects, approaches, and lands onboard a Boston Dynamics Spot legged robot, along with extensive experimental studies validating this marsupial aerial/ground team's capacity to safely reassemble from the airborne scouting phase without wireless communication. (A hypothetical sketch of such a detect-approach-land loop appears after this list.)
  5. Unmanned aerial vehicles (UAVs), equipped with a variety of sensors, are being used to provide actionable information that augments first responders' situational awareness in disaster areas during urban search and rescue (SaR) operations. However, existing aerial robots are unable to sense the occluded spaces in collapsed structures and the voids buried in disaster rubble that may contain victims. In this study, we developed AiRobSim, a framework that simulates an aerial robot acquiring both aboveground and underground information for post-disaster SaR. Integrating the UAV with ground-penetrating radar (GPR) and other sensors, such as a global navigation satellite system (GNSS) receiver, an inertial measurement unit (IMU), and cameras, enables the aerial robot to provide a holistic view of complex urban disaster areas. The robot-collected data can help locate critical spaces under the rubble and save trapped victims. The simulation framework can also serve as a virtual training platform on which novice users learn to control and operate the robot before actual deployment. Data streams provided by the platform, including maneuver commands, robot states, and environmental information, have the potential to facilitate both the understanding of decision-making in urban SaR and the training of future intelligent SaR robots.
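Record 4 above outlines a vision-only detect, approach, and land pipeline for rejoining the ground robot without radio contact. As a minimal sketch of the general shape of such a perception-driven loop, the following Python models the phase transitions; the detector stub, the phase names, and the 1 m commit threshold are assumptions for illustration, not details from that paper.

```python
# Hypothetical sketch of a communication-free detect / approach / land loop.
# The detector stub, phases, and thresholds are illustrative assumptions only.
from enum import Enum, auto
import random


class Phase(Enum):
    SEARCH = auto()    # fly the survey pattern, scanning for the ground robot
    APPROACH = auto()  # servo toward the detected landing deck
    LAND = auto()      # commit to the final descent


def detect_ground_robot() -> float | None:
    """Stand-in for an onboard DNN vehicle-to-vehicle detector.

    Returns an estimated distance to the legged robot in meters,
    or None when no detection is made in the current camera frame.
    """
    return random.choice([None, 12.0, 4.0, 0.8])


def mission_step(phase: Phase, distance: float | None) -> Phase:
    # No radio link: every transition is driven by onboard perception only.
    if phase is Phase.LAND:
        return Phase.LAND          # landing committed; ignore further detections
    if distance is None:
        return Phase.SEARCH        # lost sight of the target, keep scouting
    if distance > 1.0:
        return Phase.APPROACH      # detected but still far, close the gap
    return Phase.LAND              # close enough, commit to landing


if __name__ == "__main__":
    phase = Phase.SEARCH
    for _ in range(10):
        phase = mission_step(phase, detect_ground_robot())
        print(phase.name)
```

In a real system each phase would wrap a full perception and control stack, but the key property sketched here matches the record's premise: the state machine advances on camera detections alone, never on a wireless message.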