Title: Motion Polytopes in Virtual Reality for Shared Control in Remote Manipulation Applications
In remote applications that mandate human supervision, shared control can prove vital by striking a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. In practice, however, achieving this balance is a challenging endeavor that depends largely on whether the operator effectively interprets the underlying shared control. Inspired by recent work on using immersive technologies to expose the internals of shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared control teleoperation employs end-effector manipulability polytopes, geometrical constructs that embed joint-limit and environmental constraints. These constructs capture a holistic view of the constrained manipulator's motion and can thus be visually rendered as feedback to users on their operable space of movement. To assess the efficacy of the proposed approach, we consider a teleoperation task in which users manipulate a screwdriver attached to a robotic arm's end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of polytopes for shared control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
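For readers unfamiliar with the construct, the joint-limit portion of such a polytope can be computed directly: under box bounds on joint velocities, the set of reachable end-effector velocities v = J q̇ is the image of a box under the Jacobian, i.e., a convex polytope whose vertices come from corners of the box. Below is a minimal sketch of that computation; the Jacobian and limits are illustrative placeholders, and the paper's polytopes additionally embed environmental constraints not modeled here.

```python
# A minimal sketch of a velocity manipulability polytope for a planar arm.
# The Jacobian and joint-velocity limits are made-up numbers, not the
# paper's actual robot model.
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def velocity_polytope(J, qd_min, qd_max):
    """Vertices of the end-effector velocity set {J @ qd : qd in box}.

    The image of a box under a linear map is a zonotope, so it suffices
    to map every corner of the joint-velocity box through the Jacobian
    and take the convex hull of the results.
    """
    corners = np.array(list(itertools.product(*zip(qd_min, qd_max))))  # 2^n corners
    v = corners @ J.T            # end-effector velocity at each corner
    hull = ConvexHull(v)         # discard interior points
    return v[hull.vertices]      # polygon vertices in hull order

# Example: a 3-joint planar arm at some configuration.
J = np.array([[-0.8, -0.5, -0.2],
              [ 0.6,  0.3,  0.1]])        # 2x3 planar Jacobian
qd_min = np.array([-1.0, -1.5, -2.0])     # rad/s lower bounds
qd_max = np.array([ 1.0,  1.5,  2.0])     # rad/s upper bounds
print(velocity_polytope(J, qd_min, qd_max))  # polygon to render as a visual guide
```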
Award ID(s):
1944453
NSF-PAR ID:
10334719
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Robotics and AI
Volume:
8
ISSN:
2296-9144
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Extreme environments, such as search-and-rescue sites, bomb disposal scenes, or extraterrestrial planets, are unsafe for humans. Robots enable humans to explore and interact with these environments through remote presence and teleoperation, and virtual reality provides a medium for creating immersive and easy-to-use teleoperation interfaces. However, current virtual reality interfaces remain limited in their capabilities. In this work, we aim to advance virtual reality interfaces for robot teleoperation by developing an environment reconstruction methodology capable of recognizing objects in a robot's environment and rendering high-fidelity models of them inside a virtual reality headset. We compare our proposed environment reconstruction method against traditional point cloud streaming by having operators plan waypoint trajectories to accomplish a pick-and-place task. Overall, our results show that users find our environment reconstruction method more usable and less cognitively demanding than raw point cloud streaming.
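As a rough illustration of why model substitution can beat raw streaming, the sketch below (not the authors' actual pipeline) replaces recognized objects with catalog meshes so the VR client receives a compact scene description instead of a dense point cloud; the `recognize_objects` detector is a hypothetical stand-in.

```python
# A minimal sketch of the model-substitution idea: recognized objects in
# the point cloud are replaced by catalog meshes so the VR client renders
# a few clean models instead of streaming raw points. `recognize_objects`
# is a hypothetical placeholder for any real detector.
from dataclasses import dataclass
import numpy as np

@dataclass
class Detection:
    label: str          # catalog key, e.g. "mug"
    pose: np.ndarray    # 4x4 homogeneous transform in the robot frame

def recognize_objects(cloud: np.ndarray) -> list[Detection]:
    # Placeholder: a real detector would cluster the cloud and classify
    # each cluster; here we return one fixed detection for demonstration.
    return [Detection(label="mug", pose=np.eye(4))]

def build_scene(cloud: np.ndarray, catalog: dict[str, str]) -> list[dict]:
    """Turn a raw cloud into a compact scene description for the VR client."""
    scene = []
    for det in recognize_objects(cloud):
        scene.append({
            "mesh": catalog[det.label],   # path to a high-fidelity model
            "pose": det.pose.tolist(),    # where to place it in VR
        })
    return scene  # kilobytes of scene data instead of megabytes of points

print(build_scene(np.zeros((1000, 3)), {"mug": "models/mug.glb"}))
```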
  2. During a natural disaster such as a hurricane, earthquake, or fire, robots have the potential to explore vast areas and provide valuable aid in search and rescue efforts. These scenarios are often high-pressure and time-critical, with dynamically changing task goals. One limitation to these large-scale deployments is effective human-robot interaction. Prior work shows that collaboration between one human and one robot benefits from shared control. Here we evaluate the efficacy of shared control for human-swarm teaming in an immersive virtual reality environment. Although there are many human-swarm interaction paradigms, few are evaluated in high-pressure settings representative of their intended end use. We have developed an open-source virtual reality testbed for realistic evaluation of human-swarm teaming performance under pressure. We conduct a user study (n=16) comparing four human-swarm paradigms to a baseline condition with no robotic assistance. Shared control significantly reduces the number of instructions needed to operate the robots. While shared control leads to marginally improved team performance in experienced participants, novices perform best when the robots are fully autonomous. Our experimental results suggest that in immersive, high-pressure settings, the benefits of robotic assistance may depend on how the human and robots interact and on the human operator's expertise.
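A common way to implement shared control, sketched below purely to make the paradigm concrete (the study's specific human-swarm paradigms are not detailed here), is to linearly blend operator and autonomy commands with an arbitration weight:

```python
# A generic shared-control blend, shown only as an illustration of the
# paradigm; the blending weight `alpha` is an assumed confidence in the
# operator's input, not a parameter taken from the study.
import numpy as np

def blend(u_human: np.ndarray, u_auto: np.ndarray, alpha: float) -> np.ndarray:
    """Linearly arbitrate between operator and autonomy commands.

    alpha = 1.0 -> pure teleoperation; alpha = 0.0 -> full autonomy.
    """
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * u_human + (1.0 - alpha) * u_auto

# e.g. a swarm velocity command nudged toward the planner's suggestion
print(blend(np.array([1.0, 0.0]), np.array([0.5, 0.5]), alpha=0.7))
```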
  3. A growing number of community energy initiatives have expanded energy-related social networks to the community level. Information provision plays an important role in such programs, and energy data disclosure offers a great opportunity to promote energy savings by engaging energy-related actors. However, it is crucial to communicate this data effectively. In this research, we develop a virtual reality (VR) integrated eco-feedback system that enables both occupants and facility managers to interact with real-time energy consumption data represented in a community-scale 3D immersive environment. This paper presents the detailed front-end and back-end design and development of this novel VR-integrated eco-feedback system, using Georgia Tech's campus as a test case for implementation. The VR-integrated community-scale eco-feedback system is capable of visually characterizing differences in energy consumption across a large number of buildings of different types, and will be tested by users in future research. This research, when deployed broadly in cities, may help promote energy-aware behaviors among occupants and timely intervention strategies to achieve energy savings in urban areas.
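One simple way such a system might encode consumption visually, sketched below under the assumption of a linear green-to-red ramp (the paper's actual visual encoding may differ), is to map each building's energy-use intensity to a tint color:

```python
# A minimal sketch of eco-feedback color coding, assuming a linear
# green-to-red ramp over per-building energy-use intensity (EUI); the
# ramp and the example values are illustrative assumptions.
import numpy as np

def energy_to_rgb(eui: np.ndarray) -> np.ndarray:
    """Map energy-use intensity (e.g. kWh/m^2) to one RGB color per building."""
    lo, hi = eui.min(), eui.max()
    t = (eui - lo) / max(hi - lo, 1e-9)     # normalize to [0, 1]
    # green (low use) -> red (high use), linear interpolation per channel
    return np.stack([t, 1.0 - t, np.zeros_like(t)], axis=1)

buildings = np.array([42.0, 87.5, 120.3, 65.1])   # illustrative EUI values
print(energy_to_rgb(buildings))  # colors to tint each building model in VR
```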
  4. Abstract

    ROV operations are mainly performed via a traditional control kiosk with limited data feedback, such as joysticks and camera-view displays equipped on a surface vessel. This traditional setup requires significant personnel-on-board (POB) time and imposes high training requirements on personnel. This paper proposes a virtual reality (VR) based haptic-visual ROV teleoperation system that can substantially simplify ROV teleoperation and enhance the remote operator's situational awareness.

    This study leverages recent developments in Mixed Reality (MR) technologies, sensory augmentation, sensing technologies, and closed-loop control to visualize and render complex underwater environmental data in an intuitive and immersive way. The raw sensor data are processed with physics-engine systems and rendered as a high-fidelity digital twin model in a game engine. Certain features are visualized and displayed via the VR headset, whereas others are manifested as haptic and tactile cues via our haptic feedback systems. We applied a simulation approach to test the developed system.

    With our developed system, a high-fidelity subsea environment is reconstructed from sensor data collected by an ROV, including bathymetric, hydrodynamic, visual, and vehicle navigational measurements. Specifically, the vehicle is equipped with a navigation sensor system for real-time state estimation, an acoustic Doppler current profiler for far-field flow measurement, and a bio-inspired artificial lateral-line hydrodynamic sensor system for near-field small-scale hydrodynamics. Optimized game-engine rendering algorithms then visualize key environmental features as augmented user-interface elements in the VR headset, such as color-coded vectors, to indicate the environmental impact on the performance and function of the ROV. In addition, environmental feedback such as hydrodynamic forces is translated into patterned haptic stimuli via a haptic suit to indicate drift-inducing flows in the near field. A pilot case study was performed to verify the feasibility and effectiveness of the system design in a series of simulated ROV operation tasks.
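As a rough sketch of how a flow vector might be rendered as a spatial haptic pattern (the actuator count, placement, and falloff below are assumptions, not the authors' design), consider a ring of vibrotactile actuators in which those facing the oncoming flow vibrate hardest:

```python
# A minimal sketch of rendering a drift-inducing flow as patterned haptic
# stimuli on a ring of vibrotactile actuators around the torso. The ring
# layout and cosine falloff are illustrative assumptions.
import numpy as np

def flow_to_haptics(flow_xy: np.ndarray, n_actuators: int = 8) -> np.ndarray:
    """Intensity in [0, 1] for each actuator on a horizontal ring.

    Actuators facing the oncoming flow vibrate hardest; intensity scales
    with flow speed and falls off with angular distance from the flow.
    """
    speed = np.linalg.norm(flow_xy)
    if speed < 1e-9:
        return np.zeros(n_actuators)
    flow_dir = np.arctan2(flow_xy[1], flow_xy[0])
    angles = 2 * np.pi * np.arange(n_actuators) / n_actuators  # actuator bearings
    falloff = np.maximum(np.cos(angles - flow_dir), 0.0)       # half-cosine lobe
    return np.clip(speed, 0.0, 1.0) * falloff  # assumes speed pre-normalized to [0, 1]

print(flow_to_haptics(np.array([0.6, 0.3])))   # per-actuator drive levels
```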

    ROVs are widely used in subsea exploration and intervention tasks, playing a critical role in offshore inspection, installation, and maintenance activities. The innovative ROV teleoperation feedback and control system will lower the barrier to entry for ROV piloting.

     
  5. For a wearable robotic arm to autonomously assist a human, it has to be able to stabilize its end effector in light of the human's independent activities. This paper presents a method for stabilizing the end effector in planar assembly and pick-and-place tasks. Ideally, given accurate positioning of the end effector and of the wearable robot's attachment point, human disturbances could be compensated with a simple feedback control strategy. Realistically, system delays in both sensing and actuation suggest a predictive approach. In this work, we characterize the actuators of a wearable robotic arm and estimate these delays using linear models. We then model the motion of the human arm as an autoregressive process to predict the deviation in the robot's base position at a time horizon equal to the estimated delay. Generating set points for the end effector using this predictive model, we report position-error reductions of 19.4% (x) and 20.1% (y) relative to a feedback control strategy without prediction.
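To make the prediction step concrete, the sketch below fits an autoregressive model to a scalar position signal by least squares and rolls it forward to the delay horizon; the model order, signal, and horizon are illustrative assumptions, not the paper's identified values.

```python
# A minimal sketch of autoregressive prediction at a delay horizon,
# assuming a scalar base-position signal; the synthetic sine signal,
# AR order, and 10-sample horizon are illustrative only.
import numpy as np

def fit_ar(x: np.ndarray, order: int) -> np.ndarray:
    """Least-squares AR coefficients: x[t] ~ a1*x[t-1] + ... + ap*x[t-p]."""
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]  # recent-first lags
    X, y = np.array(rows), x[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_ahead(x: np.ndarray, coeffs: np.ndarray, horizon: int) -> float:
    """Roll the AR model forward `horizon` steps past the last sample."""
    hist = list(x[-len(coeffs):])
    for _ in range(horizon):
        hist.append(float(np.dot(coeffs, hist[::-1][:len(coeffs)])))
    return hist[-1]

t = np.linspace(0, 4 * np.pi, 400)
x = np.sin(t) + 0.05 * np.random.randn(400)   # stand-in for base motion
a = fit_ar(x, order=4)
print(predict_ahead(x, a, horizon=10))        # set point at the delay horizon
```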