Title: Autonomous Visual Assistance for Robot Operations Using a Tethered UAV
This paper develops an autonomous tethered aerial visual assistant for robot operations in unstructured or confined environments. Robotic tele-operation in remote environments is difficult due to insufficient situational awareness, largely caused by the stationary, limited field of view and the lack of depth perception of the robot's onboard camera. The emerging state of the practice is to use two robots: a primary robot and a secondary robot that acts as a visual assistant, overcoming the perceptual limitations of the onboard sensors by providing an external viewpoint. However, a tele-operated visual assistant introduces its own problems: extra manpower, manually chosen suboptimal viewpoints, and added teamwork demand between the primary and secondary operators. In this work, we use an autonomous tethered aerial visual assistant to replace the secondary robot and operator, reducing the human-robot ratio from 2:2 to 1:2. This visual assistant autonomously navigates through unstructured or confined spaces in a risk-aware manner while continuously maintaining good viewpoint quality to increase the primary operator's situational awareness. With the proposed co-robot team, tele-operation missions in nuclear operations, bomb squads, disaster response, and other domains with novel tasks or highly occluded environments can benefit from reduced manpower and teamwork demand, along with improved visual assistance quality based on trustworthy risk-aware motion in cluttered environments.
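To make the risk-aware viewpoint selection trade-off concrete, below is a minimal Python sketch of one way a candidate viewpoint could be scored against both motion risk and viewpoint quality. The candidate list, the risk and quality values, the weights, and the hard risk cap are all illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Viewpoint:
    name: str
    risk: float     # assumed probability of collision/failure en route, in [0, 1]
    quality: float  # assumed measure of how well the pose shows the task, in [0, 1]

def select_viewpoint(candidates, risk_weight=0.7, risk_cap=0.3):
    """Pick the candidate that maximizes quality penalized by risk,
    after discarding any viewpoint whose risk exceeds a hard cap."""
    safe = [v for v in candidates if v.risk <= risk_cap]
    if not safe:
        return None  # hold position rather than accept an unacceptable risk
    return max(safe, key=lambda v: v.quality - risk_weight * v.risk)

candidates = [
    Viewpoint("overhead", risk=0.10, quality=0.60),
    Viewpoint("side_close", risk=0.25, quality=0.90),
    Viewpoint("through_gap", risk=0.45, quality=0.95),  # filtered out: too risky
]
print(select_viewpoint(candidates).name)  # -> side_close
```

The hard cap reflects the risk-aware intent: no amount of viewpoint quality justifies exceeding an acceptable risk level, so an over-risky candidate is discarded rather than merely penalized.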
Award ID(s):
1945105
NSF-PAR ID:
10313426
Author(s) / Creator(s):
Editor(s):
Ishigami G., Yoshida K.
Date Published:
Journal Name:
Field and Service Robotics. Springer Proceedings in Advanced Robotics
Volume:
16
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Rapid advancements in Artificial Intelligence have shifted the focus from traditional human-directed robots to fully autonomous ones that do not require explicit human control, commonly referred to as Human-on-the-Loop (HotL) systems. Transparency in HotL systems requires clear explanations of autonomous behavior so that humans are aware of what is happening in the environment and can understand why robots behave in a certain way. However, in complex multi-robot environments, especially those in which the robots are autonomous and mobile, humans may struggle to maintain situational awareness. Presenting humans with rich explanations of autonomous behavior tends to overwhelm them with information and degrade their understanding of the situation. Explaining the autonomous behavior of multiple robots therefore creates a design tension that demands careful investigation. This paper examines the User Interface (UI) design trade-offs associated with providing timely and detailed explanations of autonomous behavior for swarms of small Unmanned Aerial Systems (sUAS), or drones. We analyze the impact of UI design choices on human situational awareness and, drawing on multiple user studies with both inexperienced and expert sUAS operators, present our design solution and initial guidelines for designing a HotL multi-sUAS interface.
  2. In this work, we address the flexible physical docking-and-release and recharging needs of a marsupial system comprising an autonomous tiltrotor hybrid Micro Aerial Vehicle and a high-end legged locomotion robot. In persistent monitoring and emergency response situations, such aerial/ground robot teams can offer rapid situational awareness by taking off from the mobile ground robot and scouting a wide area from the sky. For this type of operational profile to retain its long-term effectiveness, regrouping via landing and docking of the aerial robot onboard the ground robot is a key requirement, and onboard recharging is a necessity for systematic missions. We present a framework comprising a novel landing mechanism with recharging capabilities embedded into its design, an external battery-based recharging extension for our previously developed power-harvesting Micro Aerial Vehicle module, and a strategy for reliable landing and docking-and-release between the two robots. We specifically address the need for this system to be ferried by a quadruped ground system while remaining reliable during aggressive legged locomotion over harsh terrain. We present conclusive experimental validation studies by deploying our solution on a marsupial system comprising the MiniHawk micro tiltrotor and the Boston Dynamics Spot legged robot.
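As a rough illustration of the docking-and-release cycle described in item 2, the following Python state machine sketches one plausible sequencing of release, scouting, return, landing, and recharging. All state names and transition conditions are hypothetical assumptions, not the authors' control logic.

```python
from enum import Enum, auto

class DockState(Enum):
    DOCKED_CHARGING = auto()
    RELEASED = auto()
    SCOUTING = auto()
    RETURNING = auto()
    LANDING = auto()

def next_state(state, battery_ok, over_pad, touched_down):
    """One step of a hypothetical docking-and-release cycle."""
    if state is DockState.DOCKED_CHARGING and battery_ok:
        return DockState.RELEASED         # unlock the mechanism, spin up rotors
    if state is DockState.RELEASED:
        return DockState.SCOUTING         # take off and survey the area
    if state is DockState.SCOUTING and not battery_ok:
        return DockState.RETURNING        # head back before the battery depletes
    if state is DockState.RETURNING and over_pad:
        return DockState.LANDING          # align with the landing mechanism
    if state is DockState.LANDING and touched_down:
        return DockState.DOCKED_CHARGING  # lock in and start recharging
    return state                          # otherwise hold the current state

state = next_state(DockState.DOCKED_CHARGING,
                   battery_ok=True, over_pad=False, touched_down=False)
print(state)  # -> DockState.RELEASED
```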
  3. While robot-assisted minimally invasive surgery (RMIS) procedures afford a variety of benefits over open surgery and manual laparoscopic operations (including increased tool dexterity; reduced patient pain, incision size, trauma, and recovery time; and lower infection rates [1]), lack of spatial awareness remains an issue. Typical laparoscopic imaging can lack sufficient depth cues, and haptic feedback, if provided, rarely reflects realistic tissue-tool interactions. This work is part of a larger ongoing research effort to reconstruct 3D surfaces using multiple viewpoints in RMIS to increase visual perception. The manual placement and adjustment of multicamera systems in RMIS are nonideal and prone to error [2], and other autonomous approaches focus on tool tracking and do not consider reconstruction of the surgical scene [3, 4, 5]. The group's previous work investigated a novel, context-aware autonomous camera positioning method [6], which incorporated both tool location and scene coverage for multiple camera viewpoint adjustments. In this paper, the authors expand upon that prior work by implementing a streamlined deep reinforcement learning approach between optimal viewpoints calculated using the prior method [6], which encourages discovery of otherwise unobserved and additional camera viewpoints. Combining the framework and robustness of the previous work with the efficiency and additional viewpoints of the augmentations presented here results in improved performance and scene coverage, showing promise for real-time implementation.
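The viewpoint-selection idea in item 3, combining tool location with scene coverage, can be illustrated with a short sketch of the kind of reward a reinforcement learning agent might optimize. The weights, the field-of-view model, and the function signature are assumptions for illustration only, not the authors' method.

```python
import numpy as np

def viewpoint_reward(tool_xy, camera_center_xy, covered_fraction,
                     w_tool=0.6, w_coverage=0.4, fov_radius=0.5):
    """Reward keeping the tool near the camera's center of view while also
    rewarding the overall fraction of the surgical scene observed so far."""
    dist = np.linalg.norm(np.asarray(tool_xy) - np.asarray(camera_center_xy))
    tool_term = max(0.0, 1.0 - dist / fov_radius)  # 1 when centered, 0 outside FOV
    return w_tool * tool_term + w_coverage * covered_fraction

# Example: tool slightly off-center, 70% of the scene reconstructed so far.
print(viewpoint_reward(tool_xy=(0.1, 0.0), camera_center_xy=(0.0, 0.0),
                       covered_fraction=0.7))  # -> 0.76
```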
  4. The Industrial Internet of Things has increased the number of sensors permanently installed in industrial plants. Yet there will be gaps in coverage due to broken sensors or sparse density in very large plants, such as in the petrochemical industry. Modern emergency response operations are beginning to use Small Unmanned Aerial Systems (sUAS) as remote sensors to provide rapid, improved situational awareness. Ground-based sensors are an integral component of overall situational awareness platforms, as they can provide the longer-term persistent monitoring that aerial drones cannot. Squishy Robotics and the Berkeley Emergent Space Tensegrities Laboratory have developed hardware and a framework for rapidly deploying sensor robots for integrated ground-aerial disaster response. The semi-autonomous delivery of sensors using tensegrity (tension-integrity) robotics relies on structures that are flexible and lightweight and have high stiffness-to-weight ratios, making them ideal candidates for robust high-altitude deployments. Squishy Robotics has developed a tensegrity robot for commercial use in Hazardous Materials (HazMat) scenarios that can be deployed from commercial drones or other aircraft. Squishy Robots carrying a delicate sensing and communication payload have been successfully deployed from altitudes of up to 1,000 ft. This paper describes a framework for optimizing the deployment of emergency sensors spatially and over time. AI techniques (e.g., Long Short-Term Memory neural networks) identify regions where sensors would be most valuable without requiring humans to enter the potentially dangerous area. The cost function for optimization considers the costs of false-positive and false-negative errors. Mitigation decisions include shutting down the plant or evacuating the local community. The Expected Value of Information (EVI) is used to identify the most valuable type and location of physical sensors to deploy, increasing the decision-analytic value of the sensor network. A case study using data from the Tennessee Eastman process dataset of a chemical plant displayed in OSI Soft is provided.
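Item 4's use of the Expected Value of Information with asymmetric false-positive and false-negative costs can be made concrete with a small worked example: a sensor placement is valuable where observing the sensor is expected to reduce decision cost (e.g., evacuate versus continue operating) the most. All probabilities and costs below are invented for illustration.

```python
def expected_cost_without_sensor(p_leak, cost_miss, cost_false_alarm):
    """Best fixed decision when nothing is observed: either always mitigate
    (wasting cost_false_alarm whenever there is no leak) or never mitigate
    (paying cost_miss whenever there is one)."""
    always_mitigate = (1 - p_leak) * cost_false_alarm
    never_mitigate = p_leak * cost_miss
    return min(always_mitigate, never_mitigate)

def expected_cost_with_sensor(p_leak, sensitivity, specificity,
                              cost_miss, cost_false_alarm):
    """Decide after reading the sensor: mitigate on alarm, ignore otherwise."""
    false_alarm = (1 - p_leak) * (1 - specificity) * cost_false_alarm
    missed_leak = p_leak * (1 - sensitivity) * cost_miss
    return false_alarm + missed_leak

p_leak = 0.05  # assumed prior probability of a release at this location
evi = (expected_cost_without_sensor(p_leak, cost_miss=100.0, cost_false_alarm=5.0)
       - expected_cost_with_sensor(p_leak, sensitivity=0.95, specificity=0.90,
                                   cost_miss=100.0, cost_false_alarm=5.0))
print(f"EVI of this placement: {evi:.2f}")  # deploy sensors where EVI is highest
```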
  5. Abstract

    The potential impact of autonomous robots on everyday life is evident in emerging applications such as precision agriculture, search and rescue, and infrastructure inspection. However, such applications necessitate operation in unknown and unstructured environments with a broad and sophisticated set of objectives, all under strict computation and power limitations. We therefore argue that the computational kernels enabling robotic autonomy must be scheduled and optimized to guarantee timely and correct behavior, while allowing for reconfiguration of scheduling parameters at runtime. In this paper, we consider a necessary first step towards this goal of computational awareness in autonomous robots: an empirical study of a base set of computational kernels from the resource management perspective. Specifically, we conduct a data-driven study of the timing, power, and memory performance of kernels for localization and mapping, path planning, task allocation, depth estimation, and optical flow, across three embedded computing platforms. We profile and analyze these kernels to provide insight into scheduling and dynamic resource management for computation-aware autonomous robots. Notably, our results show that kernel performance correlates with a robot's operational environment, justifying the notion of computation-aware robots and showing why our work is a crucial step towards this goal.

     
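As a minimal sketch of the kind of kernel profiling item 5 describes, the snippet below times a stand-in workload over repeated runs and reports the mean and tail latency a scheduler would need for timing guarantees. It does not reproduce the paper's power or memory measurements, and the workload is a placeholder rather than a real localization or planning kernel.

```python
import statistics
import time

def profile_kernel(kernel, runs=50):
    """Time a callable over repeated runs and report the latency statistics
    a scheduler needs for timing guarantees (mean and tail behavior)."""
    samples_ms = []
    for _ in range(runs):
        start = time.perf_counter()
        kernel()
        samples_ms.append((time.perf_counter() - start) * 1e3)
    samples_ms.sort()
    return {"mean_ms": statistics.mean(samples_ms),
            "p99_ms": samples_ms[int(0.99 * (runs - 1))],  # approximate 99th percentile
            "max_ms": samples_ms[-1]}

def dummy_planning_kernel():
    # Placeholder workload standing in for one path-planning iteration.
    sum(i * i for i in range(100_000))

print(profile_kernel(dummy_planning_kernel))
```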