
Title: Motion Polytopes in Virtual Reality for Shared Control in Remote Manipulation Applications
In remote applications that mandate human supervision, shared control can prove vital by striking a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. In practice, however, achieving this balance is a challenging endeavor that largely depends on whether the operator effectively interprets the underlying shared control. Inspired by recent work on using immersive technologies to expose the internal shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared control teleoperation employs end-effector manipulability polytopes, which are geometric constructs that embed joint-limit and environmental constraints. These constructs capture a holistic view of the constrained manipulator's motion and can thus be visually represented as feedback for users on their operable space of movement. To assess the efficacy of our proposed approach, we consider a teleoperation task in which users manipulate a screwdriver attached to a robotic arm's end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of using polytopes for shared control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
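The abstract does not include an implementation. As a minimal sketch of the underlying idea, a velocity polytope can be computed by mapping the box of admissible joint velocities through the manipulator Jacobian and taking the convex hull of the images; this ignores the environmental constraints the authors also embed, and the 2-DOF planar arm below is purely illustrative:

```python
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def velocity_polytope(jacobian, qdot_min, qdot_max):
    """Convex hull of end-effector velocities reachable under
    box joint-velocity limits qdot_min <= qdot <= qdot_max."""
    n = jacobian.shape[1]
    # Enumerate the 2^n corners of the joint-velocity box.
    corners = np.array([
        [hi if bit else lo for bit, lo, hi in zip(mask, qdot_min, qdot_max)]
        for mask in itertools.product([0, 1], repeat=n)
    ])
    # The linear map qdot -> J @ qdot sends box corners to polytope
    # generators; the polytope is their convex hull.
    task_vels = corners @ jacobian.T
    return ConvexHull(task_vels)

# Illustrative planar 2-link arm (unit link lengths) at q = (0, pi/2).
q1, q2 = 0.0, np.pi / 2
J = np.array([
    [-np.sin(q1) - np.sin(q1 + q2), -np.sin(q1 + q2)],
    [ np.cos(q1) + np.cos(q1 + q2),  np.cos(q1 + q2)],
])
hull = velocity_polytope(J, qdot_min=[-1.0, -1.0], qdot_max=[1.0, 1.0])
```

The hull's vertices are what a VR overlay could render around the end effector as the operable space of movement.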
Journal Name: Frontiers in Robotics and AI
Sponsoring Org: National Science Foundation
More Like this
  1. Underwater robots, including Remotely Operated Vehicles (ROVs) and Autonomous Underwater Vehicles (AUVs), are currently used to support underwater missions that are either impossible or too risky for manned systems. In recent years, academia and the robotics industry have made strides in tackling the technical challenges of ROV/AUV operations. The level of intelligence of ROVs/AUVs has increased dramatically thanks to recent advances in low-power embedded computing devices and machine intelligence (e.g., AI). Nonetheless, minimizing human intervention in precise underwater operation remains extremely challenging due to the inherent difficulties and uncertainties of underwater environments. Proximity operations, especially those requiring precise manipulation, are still carried out by ROV systems fully controlled by a human pilot. A workplace-ready and worker-friendly ROV interface that properly simplifies operator control and increases remote-operation confidence is the central challenge for the wide adoption of ROVs.

    This paper examines recent advances in virtual telepresence technologies as a solution for lowering the barriers to human-in-the-loop ROV teleoperation. Virtual telepresence refers to Virtual Reality (VR) related technologies that help a user feel present in a hazardous situation without being at the actual location. We present a pilot system that uses a VR-based sensory simulator to convert ROV sensor data into human-perceivable sensations (e.g., haptics). Building on a cloud server for real-time rendering in VR, a less trained operator could operate a remote ROV thousands of miles away without losing basic situational awareness. The system is expected to enable intensive human engagement in ROV teleoperation, augmenting the operator's ability to maneuver and navigate an ROV in unknown and less explored subsea regions. This paper also discusses the opportunities and challenges of this technology for ad hoc training, workforce preparation, and safety in the future maritime industry. We expect that lessons learned from our work can help democratize human presence in future subsea engineering work by accommodating human needs and limitations to lower the entrance barrier.

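The abstract above only names the sensor-to-haptics conversion without detail. A hypothetical sketch of one such mapping, turning a sonar range reading into a vibration intensity with linear falloff, might look like the following (the function name and the thresholds are assumptions for illustration, not values from the paper):

```python
def distance_to_vibration(distance_m, d_min=0.5, d_max=5.0):
    """Map a sonar range reading (meters) to a haptic vibration
    intensity in [0, 1]: full intensity at or inside d_min, silent
    at or beyond d_max, linear falloff in between.
    d_min and d_max are illustrative thresholds."""
    if distance_m <= d_min:
        return 1.0
    if distance_m >= d_max:
        return 0.0
    return (d_max - distance_m) / (d_max - d_min)
```

An intensity like this could then drive a controller's rumble motor each time a new range sample arrives.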
  2. A growing number of community energy initiatives have enlarged energy-related social networks to the community level. Information provision plays an important role in such programs, and energy data disclosure offers a great opportunity to promote energy savings by engaging energy-related actors. However, it is crucial to communicate this data effectively. In this research, we develop a virtual reality (VR) integrated eco-feedback system that enables both occupants and facility managers to interact with real-time energy consumption data represented in a community-scale 3D immersive environment. This paper presents the detailed front-end and back-end design and development of this novel VR-integrated eco-feedback system, using Georgia Tech's campus as a test case for implementation. The VR-integrated community-scale eco-feedback system is capable of visually characterizing differences in energy consumption across a large number of buildings of different types, and will be tested by users in future research. This research, when deployed broadly in cities, may help promote energy-aware behaviors of occupants and timely intervention strategies to achieve energy savings in urban areas.
  3. Virtual reality (VR) systems have been increasingly used in recent years in various domains, such as education and training. Presence, which can be described as 'the sense of being there', is one of the most important user-experience aspects in VR. Several components may affect the level of presence, such as interaction, visual fidelity, and auditory cues. In recent years, significant effort has been put into increasing the sense of presence in VR. This study focuses on improving user experience in VR by increasing presence through increased interaction fidelity and enhanced illusions. Interaction in real life includes mutual and bidirectional encounters between two or more individuals through shared tangible objects. However, the majority of VR interaction to date has been unidirectional. This research aims to bridge this gap by enabling bidirectional mutual tangible embodied interactions between human users and virtual characters in world-fixed VR through real-virtual shared objects that extend from the virtual world into the real world. I hypothesize that the proposed novel interaction will shrink the boundary between the real and virtual worlds (through virtual characters that affect the physical world), increase the seamlessness of the VR system (enhancing the illusion) and the fidelity of interaction, and increase the level of presence and social presence, enjoyment, and engagement. This paper includes the motivation, design, and development details of the proposed novel world-fixed VR system, along with future directions.
  4. This is one of the first accounts of the security analysis of consumer immersive Virtual Reality (VR) systems. This work breaks new ground, coins new terms, and constructs proof-of-concept implementations of attacks on immersive VR. Our work used the two most widely adopted immersive VR systems, the HTC Vive and the Oculus Rift. More specifically, we were able to create attacks that can potentially disorient users, turn their Head Mounted Display (HMD) camera on without their knowledge, overlay images in their field of vision, and modify VR environmental factors that force them into hitting physical objects and walls. Finally, we illustrate through a human-participant deception study the success of exploiting VR systems to control immersed users and move them to a location in physical space without their knowledge. We term this the Human Joystick Attack. We conclude our work with future research directions and ways to enhance the security of these systems.
  5. We present a novel haptic teleoperation approach that considers not only the safety but also the stability of a teleoperation system. Specifically, we build upon previous work on haptic shared control, which generates a reference haptic feedback that helps the human operator safely navigate the robot without taking away their control authority. Crucially, in this approach the force rendered to the user is not directly reflected in the motion of the robot (which is still directly controlled by the user); however, previous work in the area neglected to consider the possible instabilities in the feedback loop generated by a user who over-responds to the haptic force. In this paper we introduce a differential constraint on the rendered force that makes the system finite-gain L2 stable; the constraint results in a Quadratically Constrained Quadratic Program (QCQP), for which we provide a closed-form solution. Our constraint is related to, but less restrictive than, the typical passivity constraint used in previous literature. We conducted an experimental simulation in which a human operator flies a UAV near an obstacle to evaluate the proposed method.
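The abstract does not reproduce its actual stability constraint or QCQP. As an illustrative sketch only, the simplest member of this family, projecting a desired rendered force onto a norm ball, admits a closed-form solution by scaling; the ball constraint here is an assumption for illustration, not the paper's differential constraint:

```python
import numpy as np

def constrain_rendered_force(f_des, c):
    """Closed-form solution of the ball-constrained QCQP
        min_f ||f - f_des||^2   s.t.   ||f||^2 <= c.
    If f_des already satisfies the constraint it is returned
    unchanged; otherwise it is scaled radially back onto the
    constraint boundary (the minimizer for a spherical constraint)."""
    f_des = np.asarray(f_des, dtype=float)
    norm_sq = float(np.dot(f_des, f_des))
    if norm_sq <= c:
        return f_des
    return np.sqrt(c / norm_sq) * f_des
```

A closed-form projection like this is what makes such constraints practical at haptic rates (typically around 1 kHz), since no iterative solver is needed inside the rendering loop.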