Extreme environments, such as search and rescue missions, defusing bombs, or exploring extraterrestrial planets, are unsafe for humans. Robots enable humans to explore and interact with these environments through remote presence and teleoperation, and virtual reality provides a medium for creating immersive, easy-to-use teleoperation interfaces. However, current virtual reality interfaces remain limited in their capabilities. In this work, we aim to advance virtual reality interfaces for robot teleoperation by developing an environment reconstruction methodology capable of recognizing objects in a robot’s environment and rendering high-fidelity models of them inside a virtual reality headset. We compare our proposed environment reconstruction method against traditional point cloud streaming by having operators plan waypoint trajectories to accomplish a pick-and-place task. Overall, our results show that users find our environment reconstruction method more usable and less cognitively demanding than raw point cloud streaming.
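As a minimal sketch of the reconstruction idea, assuming an object recognizer and a library of meshes are available (the recognize_object callback and MODEL_LIBRARY below are hypothetical placeholders, not the paper's actual pipeline), recognized point clusters can be sent to the VR client as lightweight label-and-pose entries, with a downsampled point fallback for anything unrecognized.

```python
# Hedged sketch only: replaces recognized object clusters with mesh references
# so the VR headset renders high-fidelity models instead of raw points.
import numpy as np

MODEL_LIBRARY = {"mug": "meshes/mug.obj", "box": "meshes/box.obj"}  # hypothetical assets

def reconstruct_scene(point_cloud, clusters, recognize_object):
    """For each segmented cluster, emit a (mesh, position) entry when recognition
    is confident, otherwise fall back to a coarsely downsampled point set."""
    scene = []
    for idx in clusters:                       # idx: indices of one segmented object
        pts = point_cloud[idx]
        label, confidence = recognize_object(pts)
        centroid = pts.mean(axis=0)
        if label in MODEL_LIBRARY and confidence > 0.8:
            scene.append({"type": "mesh", "asset": MODEL_LIBRARY[label],
                          "position": centroid.tolist()})
        else:
            scene.append({"type": "points", "points": pts[::10].tolist()})
    return scene

# Dummy usage with a stand-in recognizer that always reports "mug".
cloud = np.random.rand(1000, 3)
clusters = [np.arange(0, 500), np.arange(500, 1000)]
scene = reconstruct_scene(cloud, clusters, lambda pts: ("mug", 0.9))
print([item["type"] for item in scene])  # ['mesh', 'mesh']
```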
Motion Polytopes in Virtual Reality for Shared Control in Remote Manipulation Applications
In remote applications that mandate human supervision, shared control can prove vital by establishing a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. In practice, though, achieving this balance is a challenging endeavor that largely depends on whether the operator effectively interprets the underlying shared control. Inspired by recent works on using immersive technologies to expose the internal shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared control teleoperation employs end effector manipulability polytopes, which are geometrical constructs that embed joint limit and environmental constraints. These constructs capture a holistic view of the constrained manipulator’s motion and can thus be visually represented as feedback for users on their operable space of movement. To assess the efficacy of our proposed approach, we consider a teleoperation task where users manipulate a screwdriver attached to a robotic arm’s end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of using polytopes for shared control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
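As a rough illustration of the underlying construct, the sketch below computes a velocity manipulability polytope from a manipulator Jacobian and joint-velocity limits by mapping the corners of the joint-velocity box through the Jacobian and taking their convex hull. The toy 2-DoF planar arm, its link lengths, and the limits are hypothetical, and the environmental constraints the paper also embeds are omitted.

```python
# Minimal sketch of a velocity manipulability polytope, assuming a known
# Jacobian J and per-joint velocity limits.
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def velocity_polytope_vertices(J, qd_min, qd_max):
    """Map every corner of the joint-velocity box through J and keep the
    convex hull: the result bounds the reachable end-effector velocities."""
    corners = np.array(list(itertools.product(*zip(qd_min, qd_max))))  # (2^n, n)
    ee_vels = corners @ J.T                                            # (2^n, m)
    hull = ConvexHull(ee_vels)
    return ee_vels[hull.vertices]

# Toy 2-DoF planar arm example (hypothetical link lengths and joint angles).
l1, l2, q1, q2 = 0.4, 0.3, 0.3, 1.1
J = np.array([
    [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
    [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
])
verts = velocity_polytope_vertices(J, qd_min=[-1.0, -1.0], qd_max=[1.0, 1.0])
print(verts)  # polygon vertices that could be rendered as a VR overlay
```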
- Award ID(s): 1944453
- PAR ID: 10334719
- Date Published:
- Journal Name: Frontiers in Robotics and AI
- Volume: 8
- ISSN: 2296-9144
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
A fundamental challenge of shared autonomy is to use high-DoF robots to assist, rather than hinder, humans by first inferring user intent and then empowering the user to achieve their intent. Although successful, prior methods either rely heavily on a priori knowledge of all possible human intents or require many demonstrations and interactions with the human to learn these intents before being able to assist the user. We propose and study a zero-shot, vision-only shared autonomy (VOSA) framework designed to allow robots to use end-effector vision to estimate zero-shot human intents in conjunction with blended control to help humans accomplish manipulation tasks with unknown and dynamically changing object locations. To demonstrate the effectiveness of our VOSA framework, we instantiate a simple version of VOSA on a Kinova Gen3 manipulator and evaluate our system by conducting a user study on three tabletop manipulation tasks. The performance of VOSA matches that of an oracle baseline model that receives privileged knowledge of possible human intents while also requiring significantly less effort than unassisted teleoperation. In more realistic settings, where the set of possible human intents is fully or partially unknown, we demonstrate that VOSA requires less human effort and time than baseline approaches while being preferred by a majority of the participants. Our results demonstrate the efficacy and efficiency of using off-the-shelf vision algorithms to enable flexible and beneficial shared control of a robot manipulator. Code and videos available here: https://sites.google.com/view/zeroshot-sharedautonomy/home
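As a hedged illustration of the blended-control idea described above (not the authors' exact arbitration rule), the sketch below infers a likely goal from vision-detected object positions by cosine alignment with the user's commanded velocity and blends an assistive velocity with the user's command in proportion to that confidence; the object positions, gains, and threshold are invented for the example.

```python
# Illustrative confidence-based blending between a human command and an
# autonomous assist toward a vision-detected goal.
import numpy as np

def infer_goal(ee_pos, user_vel, detected_objects):
    """Pick the detected object whose direction best aligns with the user's
    commanded velocity; return it with a [0, 1] confidence score."""
    if np.linalg.norm(user_vel) < 1e-6:
        return None, 0.0
    u = user_vel / np.linalg.norm(user_vel)
    best, best_score = None, -1.0
    for obj in detected_objects:
        d = obj - ee_pos
        score = float(u @ (d / np.linalg.norm(d)))  # cosine alignment
        if score > best_score:
            best, best_score = obj, score
    return best, max(0.0, best_score)

def blended_command(ee_pos, user_vel, detected_objects, k_assist=1.0):
    """Confident intent gives more autonomy; ambiguous intent leaves the user in control."""
    goal, alpha = infer_goal(ee_pos, user_vel, detected_objects)
    if goal is None:
        return user_vel
    assist_vel = k_assist * (goal - ee_pos)
    return (1.0 - alpha) * user_vel + alpha * assist_vel

# Example: two candidate objects seen by the end-effector camera.
objects = [np.array([0.5, 0.2, 0.1]), np.array([0.1, 0.6, 0.1])]
cmd = blended_command(np.zeros(3), np.array([0.3, 0.1, 0.0]), objects)
print(cmd)
```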
During a natural disaster such as a hurricane, earthquake, or fire, robots have the potential to explore vast areas and provide valuable aid in search & rescue efforts. These scenarios are often high-pressure and time-critical with dynamically changing task goals. One limitation of these large-scale deployments is effective human-robot interaction. Prior work shows that collaboration between one human and one robot benefits from shared control. Here we evaluate the efficacy of shared control for human-swarm teaming in an immersive virtual reality environment. Although there are many human-swarm interaction paradigms, few are evaluated in high-pressure settings representative of their intended end use. We have developed an open-source virtual reality testbed for realistic evaluation of human-swarm teaming performance under pressure. We conduct a user study (n=16) comparing four human-swarm paradigms to a baseline condition with no robotic assistance. Shared control significantly reduces the number of instructions needed to operate the robots. While shared control leads to marginally improved team performance in experienced participants, novices perform best when the robots are fully autonomous. Our experimental results suggest that in immersive, high-pressure settings, the benefits of robotic assistance may depend on how the human and robots interact and the human operator’s expertise.
A growing number of community energy initiatives have enlarged energy-related social networks to the community level. Information provision plays an important role in such programs, and energy data disclosure offers a great opportunity to promote energy savings by engaging energy-related actors. However, it is crucial to communicate this data effectively. In this research, we develop a virtual reality (VR) integrated eco-feedback system that enables both occupants and facility managers to interact with real-time energy consumption data represented in a community-scale 3D immersive environment. This paper presents the detailed front-end and back-end design and development of this novel VR-integrated eco-feedback system, using Georgia Tech’s campus as a test case for implementation. The VR-integrated community-scale eco-feedback system is capable of visually characterizing differences in energy consumption across a large number of buildings of different types, and will be tested by users in future research. This research, when deployed broadly in cities, may help promote energy-aware behaviors of occupants and timely intervention strategies to achieve energy savings in urban areas.
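Purely as an illustration of the front-end/back-end split such a system implies, the sketch below shows a minimal back-end endpoint that a VR front-end could poll for per-building energy readings; the building names, the read_meter placeholder, and the route are hypothetical and not drawn from the paper.

```python
# Hedged sketch of a tiny HTTP back-end serving real-time energy readings
# that an immersive VR client could poll and map onto 3D campus buildings.
import random
from flask import Flask, jsonify

app = Flask(__name__)
BUILDINGS = ["library", "dorm_a", "rec_center"]   # hypothetical campus buildings

def read_meter(building):
    """Placeholder for a real meter/database query; returns a reading in kW."""
    return round(random.uniform(50, 500), 1)

@app.route("/energy")
def energy_snapshot():
    # The VR front-end polls this endpoint and maps each reading to a color or
    # height on the corresponding building in the immersive scene.
    return jsonify({b: read_meter(b) for b in BUILDINGS})

if __name__ == "__main__":
    app.run(port=5000)
```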
We present a novel haptic teleoperation approach that considers not only the safety but also the stability of a teleoperation system. Specifically, we build upon previous work on haptic shared control, which generates a reference haptic feedback that helps the human operator safely navigate the robot without taking away their control authority. Crucially, in this approach the force rendered to the user is not directly reflected in the motion of the robot (which is still directly controlled by the user); however, previous work in the area neglected to consider the possible instabilities in the feedback loop generated by a user who over-responds to the haptic force. In this paper we introduce a differential constraint on the rendered force that makes the system finite-gain L2 stable; the constraint results in a Quadratically Constrained Quadratic Program (QCQP), for which we provide a closed-form solution. Our constraint is related to, but less restrictive than, the typical passivity constraint used in previous literature. We conducted an experimental simulation in which a human operator flies a UAV near an obstacle to evaluate the proposed method.
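The paper's actual constraint is a differential one derived for finite-gain L2 stability; the sketch below only illustrates the generic QCQP pattern it reduces to, minimizing ||f - f_ref||^2 subject to ||f||^2 <= c, whose closed-form solution is a radial scaling of the reference force. The gain, error, and reference force values are invented for the example.

```python
# Illustrative sketch only: a QCQP of the form
#     min_f ||f - f_ref||^2   s.t.   ||f||^2 <= c
# has the closed-form solution of radially scaling f_ref onto the constraint ball.
import numpy as np

def constrain_rendered_force(f_ref, c):
    """Closed-form QCQP solution: keep f_ref if feasible, else scale it down."""
    norm_sq = float(f_ref @ f_ref)
    if norm_sq <= c:
        return f_ref
    return f_ref * np.sqrt(c / norm_sq)

# Hypothetical haptic loop step: bound the rendered force by a gain on the
# tracking error so an over-responding user cannot be driven by ever-growing feedback.
gamma, error = 2.0, np.array([0.05, 0.02, 0.0])
f_ref = np.array([1.5, -0.4, 0.2])     # force suggested by the shared controller
f_safe = constrain_rendered_force(f_ref, (gamma * np.linalg.norm(error)) ** 2)
print(f_safe)
```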