We present a novel learning-based trajectory generation algorithm for outdoor robot navigation. Our goal is to compute collision-free paths that also satisfy environment-specific traversability constraints. Our approach is designed for global planning using limited onboard robot perception in mapless environments, while ensuring comprehensive coverage of all traversable directions. Our formulation uses a Conditional Variational Autoencoder (CVAE) generative model that is enhanced with traversability constraints and an optimization formulation for coverage. We highlight the benefits of our approach over state-of-the-art trajectory generation approaches and demonstrate its performance in challenging, large outdoor environments, including around buildings, across intersections, along trails, and over off-road terrain, using a Clearpath Husky and a Boston Dynamics Spot robot. In practice, our approach yields a 6% improvement in coverage of traversable areas and an 89% reduction in trajectory portions residing in non-traversable regions. Our video is available at https://youtu.be/3eJ2soAzXnU
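To make the generative part of this formulation concrete, here is a minimal sketch of a CVAE trajectory generator conditioned on local perception, written in PyTorch. The waypoint count, network sizes, condition encoding, and the way the traversability penalty enters the loss are all illustrative assumptions on our part, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): CVAE that generates fixed-length
# waypoint trajectories conditioned on a flattened local-perception feature vector.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_WAYPOINTS = 20      # waypoints per trajectory (assumed)
TRAJ_DIM = N_WAYPOINTS * 2
COND_DIM = 128        # size of the perception feature vector (assumed)
LATENT_DIM = 16

class TrajectoryCVAE(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder q(z | trajectory, condition) -> mean and log-variance
        self.enc = nn.Sequential(
            nn.Linear(TRAJ_DIM + COND_DIM, 256), nn.ReLU(),
            nn.Linear(256, 2 * LATENT_DIM),
        )
        # Decoder p(trajectory | z, condition)
        self.dec = nn.Sequential(
            nn.Linear(LATENT_DIM + COND_DIM, 256), nn.ReLU(),
            nn.Linear(256, TRAJ_DIM),
        )

    def forward(self, traj, cond):
        mu, logvar = self.enc(torch.cat([traj, cond], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        recon = self.dec(torch.cat([z, cond], dim=-1))
        return recon, mu, logvar

    def sample(self, cond, n):
        # Draw n candidate trajectories for a single condition vector of shape (COND_DIM,).
        z = torch.randn(n, LATENT_DIM)
        flat = self.dec(torch.cat([z, cond.expand(n, -1)], dim=-1))
        return flat.view(n, N_WAYPOINTS, 2)

def cvae_loss(recon, traj, mu, logvar, trav_cost):
    # Reconstruction + KL divergence + a hypothetical traversability penalty;
    # trav_cost scores how much of the generated path lies in non-traversable regions.
    rec = F.mse_loss(recon, traj)
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld + trav_cost(recon)
```

During planning, one would draw many candidate trajectories for the current perception vector via `sample()` and then apply a coverage-style selection over that candidate set; that optimization step is not shown here.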
Best and Worst External Viewpoints for Teleoperation Visual Assistance
An HRI study with 31 expert robot operators established that an external viewpoint from an assisting robot could increase teleoperation performance by 14% to 58% while reducing human error by 87% to 100%. This video illustrates those findings with a side-by-side comparison of the best and worst viewpoints for the passability and traversability affordances. The passability scenario uses a small unmanned aerial system as a visual assistant that can reach any viewpoint on the idealized hemisphere surrounding the task action. The traversability scenario uses a small ground robot that is restricted to the subset of viewpoints it can physically reach.
- Award ID(s): 1945105
- PAR ID: 10313425
- Date Published:
- Journal Name: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Unmanned Aerial Vehicle (UAV) flight paths have been shown to communicate meaning to human observers, similar to human gestural communication. This paper presents the results of a UAV gesture perception study designed to assess how observer viewpoint perspective may impact how humans perceive the shape of UAV gestural motion. Robot gesture designers have demonstrated that robots can indeed communicate meaning through gesture; however, many of these results are limited to an idealized range of viewer perspectives and do not consider how the perception of a robot gesture may suffer from obfuscation or self-occlusion from some viewpoints. This paper presents the results of three online user studies that examine participants' ability to accurately perceive the intended shape of two-dimensional UAV gestures from varying viewer perspectives. We used a logistic regression model to characterize participant gesture classification accuracy, demonstrating that viewer perspective does impact how participants perceive the shape of UAV gestures. Our results yielded a viewpoint angle threshold beyond which participants were able to assess the intended shape of a gesture's motion with 90% accuracy. We also introduce a perceptibility score that captures user confidence, time to decision, and labeling accuracy, and helps explain how differences in flight paths impact perception across viewpoints. These findings will enable UAV gesture systems that ensure, with a high degree of confidence, that gesture motions can be accurately perceived by human observers.
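As a rough illustration of the analysis described above, the sketch below fits a logistic regression of per-trial classification correctness against viewpoint angle and then solves for the angle at which the predicted accuracy crosses 90%. The angles, responses, and resulting threshold are placeholder values chosen for illustration, not the study's data.

```python
# Sketch: logistic regression of correctness vs. viewpoint angle, plus the
# angle at which predicted accuracy crosses 90%. All values are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Viewpoint angle (degrees away from an idealized frontal view) and whether the
# gesture shape was labeled correctly (1) or not (0) -- illustrative demo data only.
angles = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float).reshape(-1, 1)
correct = np.array([1, 1, 1, 1, 1, 1, 0, 1, 0, 0])

model = LogisticRegression().fit(angles, correct)
b0, b1 = model.intercept_[0], model.coef_[0, 0]

# Solve logit(0.9) = b0 + b1 * angle for the 90%-accuracy viewpoint angle.
angle_at_90 = (np.log(0.9 / 0.1) - b0) / b1
print(f"estimated 90%-accuracy viewpoint threshold: {angle_at_90:.1f} degrees")
```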
While robot-assisted minimally invasive surgery (RMIS) procedures afford a variety of benefits over open surgery and manual laparoscopic operations (including increased tool dexterity; reduced patient pain, incision size, trauma, and recovery time; and lower infection rates [1]), lack of spatial awareness remains an issue. Typical laparoscopic imaging can lack sufficient depth cues, and haptic feedback, if provided, rarely reflects realistic tissue–tool interactions. This work is part of a larger ongoing research effort to reconstruct 3D surfaces using multiple viewpoints in RMIS to increase visual perception. The manual placement and adjustment of multicamera systems in RMIS are nonideal and prone to error [2], and other autonomous approaches focus on tool tracking and do not consider reconstruction of the surgical scene [3, 4, 5]. The group's previous work investigated a novel, context-aware autonomous camera positioning method [6], which incorporated both tool location and scene coverage for multiple camera viewpoint adjustments. In this paper, the authors expand upon this prior work by implementing a streamlined deep reinforcement learning approach between the optimal viewpoints calculated using the prior method [6], which encourages discovery of otherwise unobserved, additional camera viewpoints. Combining the framework and robustness of the previous work with the efficiency and additional viewpoints of the augmentations presented here results in improved performance and scene coverage, a promising step towards real-time implementation.
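The viewpoint-selection idea can be illustrated with a much simpler stand-in than the paper's streamlined deep reinforcement learning agent: the sketch below runs tabular Q-learning over a small, hypothetical discretization of candidate camera poses, with a placeholder reward in place of the combined tool-visibility and scene-coverage objective. The state and action sets, reward, and hyperparameters are all our own simplifications for illustration.

```python
# Sketch: tabular Q-learning over a small discrete set of candidate camera viewpoints.
# The reward is a placeholder for a tool-visibility + scene-coverage score; none of
# this reflects the paper's actual formulation.
import numpy as np

rng = np.random.default_rng(0)
N_VIEWPOINTS = 8                              # candidate camera poses (assumed)
Q = np.zeros((N_VIEWPOINTS, N_VIEWPOINTS))    # Q[current_view, next_view]
alpha, gamma, eps = 0.1, 0.9, 0.2

def reward(view):
    # Placeholder: in practice this would score tool visibility and scene coverage
    # returned by the simulator or reconstruction pipeline for the chosen viewpoint.
    return float(np.sin(view)) + rng.normal(scale=0.05)

state = 0
for step in range(5000):
    # Epsilon-greedy choice of the next viewpoint.
    if rng.random() < eps:
        action = int(rng.integers(N_VIEWPOINTS))
    else:
        action = int(np.argmax(Q[state]))
    r = reward(action)
    # Q-learning update toward the best value reachable from the new viewpoint.
    Q[state, action] += alpha * (r + gamma * Q[action].max() - Q[state, action])
    state = action

print("preferred next viewpoint from each pose:", np.argmax(Q, axis=1))
```

In the deep variant, the Q-table would be replaced by a network mapping image or scene features to viewpoint values, but the update has the same structure.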
Visual recognition of three-dimensional signals, such as faces, is challenging because the signals appear different from different viewpoints. A flexible but cognitively challenging solution is viewpoint-independent recognition, where receivers identify signals from novel viewing angles. Here, we used same/different concept learning to test viewpoint-independent face recognition in Polistes fuscatus, a wasp that uses facial patterns to individually identify conspecifics. We found that wasps use extrapolation to identify novel views of conspecific faces. For example, wasps identify a pair of pictures of the same wasp as the 'same', even if the pictures are taken from different views (e.g. one face 0 deg rotation, one face 60 deg rotation). This result is notable because it provides the first evidence of view-invariant recognition via extrapolation in an invertebrate. The results suggest that viewpoint-independent recognition via extrapolation may be a widespread strategy to facilitate individual face recognition.

