Title: Built to Order: A Virtual Reality Interface for Assigning High-Level Assembly Goals to Remote Robots
Many real-world factory tasks require human expertise and involvement for robot control. However, traditional robot operation requires that users undergo extensive and time-consuming robot-specific training to understand the specific constraints of each robot. We describe a user interface that supports users in assigning and monitoring remote assembly tasks in Virtual Reality (VR) through high-level goal-based instructions rather than low-level direct control. Our user interface is part of a testbed in which a motion-planning algorithm determines, verifies, and executes robot-specific trajectories in simulation.
Award ID(s):
2037101
PAR ID:
10482127
Publisher / Repository:
ACM
Date Published:
Journal Name:
SUI '23: Proceedings of the 2023 ACM Symposium on Spatial User Interaction
ISBN:
9798400702815
Page Range / eLocation ID:
1 to 2
Format(s):
Medium: X
Location:
Sydney NSW Australia
Sponsoring Org:
National Science Foundation
More Like this
  1.
    The recent development of Robot-Assisted Minimally Invasive Surgery (RAMIS) has done much to ease the performance of complex Minimally Invasive Surgery (MIS) tasks and lead to better clinical outcomes. Compared to direct master-slave manipulation, semi-autonomous control of the surgical robot can enhance the efficiency of the operation, particularly for repetitive tasks. However, operating in a highly dynamic in-vivo environment is complex, so supervisory control functions should be included to ensure flexibility and safety during the autonomous control phase. This paper presents a haptic rendering interface that enables supervised semi-autonomous control of a surgical robot. Bayesian optimization is used to tune user-specific parameters during the surgical training process. User studies were conducted on a customized simulator for validation. Detailed comparisons are made with and without the supervised semi-autonomous control mode in terms of the number of clutching events, task completion time, master robot end-effector trajectory, and average control speed of the slave robot. The effectiveness of the Bayesian optimization is also evaluated, demonstrating that the optimized parameters can significantly improve users' performance. Results indicate that the proposed control method can reduce the operator's workload and enhance operation efficiency.
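The abstract above does not give implementation details for the parameter tuning; purely as an illustration, a minimal Bayesian-optimization loop for tuning a single user-specific control parameter could be sketched as below. The Gaussian-process kernel, the score function `user_score`, and all names are hypothetical stand-ins, not the paper's implementation:

```python
import math
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    """Gaussian-process posterior mean and stddev at query points Xs."""
    Kinv = np.linalg.inv(rbf(X, X) + noise * np.eye(len(X)))
    Ks = rbf(X, Xs)
    mu = Ks.T @ Kinv @ y
    var = np.clip(np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks), 1e-12, None)
    return mu, np.sqrt(var)

def expected_improvement(mu, sigma, best):
    """Expected-improvement acquisition for maximization."""
    z = (mu - best) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (mu - best) * cdf + sigma * pdf

def user_score(gain):
    """Hypothetical stand-in for one training trial: performance peaks at gain = 0.6."""
    return -(gain - 0.6) ** 2

grid = np.linspace(0.0, 1.0, 101)
X, y = [0.1, 0.9], [user_score(0.1), user_score(0.9)]
for _ in range(8):                      # eight simulated training trials
    mu, sigma = gp_posterior(np.array(X), np.array(y), grid)
    nxt = float(grid[np.argmax(expected_improvement(mu, sigma, max(y)))])
    X.append(nxt)
    y.append(user_score(nxt))
best_gain = X[int(np.argmax(y))]        # tuned user-specific parameter
```

Each loop iteration would correspond to one training trial with the user; the acquisition function trades off exploring untried parameter values against refining ones that already scored well.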
  2. We present a prototype virtual reality user interface for robot teleoperation that supports high-level specification of 3D object positions and orientations in remote assembly tasks. Users interact with virtual replicas of task objects. They asynchronously assign multiple goals in the form of 6DoF destination poses without needing to be familiar with specific robots and their capabilities, and manage and monitor the execution of these goals. The user interface employs two different spatiotemporal visualizations for assigned goals: one represents all goals within the user’s workspace (Aggregated View), while the other depicts each goal within a separate world in miniature (Timeline View). We conducted a user study of the interface without the robot system to compare how these visualizations affect user efficiency and task load. The results show that while the Aggregated View helped the participants finish the task faster, the participants preferred the Timeline View. 
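As a concrete illustration of the goal-assignment model described above, asynchronously assigned 6DoF goals can be kept in a queue that a planner consumes in order. The class and field names here are hypothetical, not the paper's API:

```python
from dataclasses import dataclass
from collections import deque

@dataclass(frozen=True)
class GoalPose:
    """A 6DoF destination pose for a task object."""
    position: tuple      # (x, y, z) in meters
    orientation: tuple   # unit quaternion (w, x, y, z)
    object_id: str       # virtual replica this goal applies to

class GoalQueue:
    """Users assign goals asynchronously; the planner consumes them in order."""
    def __init__(self):
        self.pending = deque()
        self.done = []

    def assign(self, goal: GoalPose):
        self.pending.append(goal)

    def execute_next(self) -> GoalPose:
        goal = self.pending.popleft()
        # In the real system, a motion planner would verify and execute a
        # robot-specific trajectory here; this sketch just records completion.
        self.done.append(goal)
        return goal

q = GoalQueue()
q.assign(GoalPose((0.3, 0.1, 0.2), (1.0, 0.0, 0.0, 0.0), "block_a"))
q.assign(GoalPose((0.0, 0.4, 0.2), (1.0, 0.0, 0.0, 0.0), "block_b"))
first = q.execute_next()
```

Decoupling goal specification from execution in this way is what lets the user keep assigning and monitoring goals without knowing the robot's kinematic constraints.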
  3. Interfacing between robots and humans has come a long way in the past few years, and new methods for smart, robust interaction are needed. Typically, a technician has to program a routine for a robot in order for the robot to be useful. This puts up a significant barrier to entry into the field of automating tasks using robots: not only are a technician and a computer required, but the robot is not adaptive to the immediate needs of the user. The robot is only capable of executing a pre-determined task, and for any change to be made the entire system needs to be paused. This project seeks to bridge the gap between user and robot interface, creating an easy-to-use system that allows for adaptive robot control. Using computer vision with an iPhone's monocular camera and integrated LiDAR sensor, gesture recognition and pose estimation were conducted within an independent system to control the Baxter humanoid robot. The gathered data was sent wirelessly to the robot, which interpreted it and replayed the actions performed by the user.
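A common pattern for the interpretation step described above is a dispatch table from recognized gesture labels to robot commands. The gesture names, command strings, and functions below are hypothetical illustrations, not the project's actual protocol:

```python
from typing import Callable, Dict, Optional, Tuple

def move_arm(pose):
    return f"move_to:{pose}"

def open_gripper(_):
    return "gripper:open"

def close_gripper(_):
    return "gripper:close"

# Dispatch table: recognized gesture label -> robot command builder.
DISPATCH: Dict[str, Callable] = {
    "point": move_arm,
    "open_hand": open_gripper,
    "fist": close_gripper,
}

def interpret(gesture: str, pose_estimate: Optional[Tuple] = None) -> str:
    """Translate a recognized gesture (plus the user's estimated hand pose)
    into a command string to send wirelessly to the robot."""
    if gesture not in DISPATCH:
        return "noop"  # unknown gestures are ignored rather than replayed
    return DISPATCH[gesture](pose_estimate)

cmd = interpret("point", (0.2, 0.0, 0.5))
```

Keeping recognition and command interpretation separate like this makes the gesture vocabulary easy to extend without pausing or reprogramming the robot.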
  4. Objective: Despite advances in human-machine interface design, we lack the ability to give people precise and fast control over high degree-of-freedom (DOF) systems, like robotic limbs. Attempts to improve control often focus on the static map that links user input to device commands, hypothesizing that the user's skill acquisition can be improved by finding an intuitive map. Here we investigate which map features affect skill acquisition. Methods: Each of our 36 participants used one of three maps that translated their 19-dimensional finger movement into the 5 robot joints and used the robot to pick up and move objects. The maps were each constructed to maximize a different control principle to reveal which features are most critical for user performance: 1) Principal Components Analysis, to maximize the linear capture of finger variance; 2) our novel Egalitarian Principal Components Analysis, to maximize the equality of variance captured by each component; and 3) a Nonlinear Autoencoder, to achieve both high variance capture and less biased variance allocation across latent dimensions. Results: Despite large differences in the mapping structures, there were no significant differences in group performance. Conclusion: Participants' natural aptitude had a far greater effect on performance than the map. Significance: Robot-user interfaces are becoming increasingly common and require new designs to make them easier to operate. Here we show that optimizing the map may not be the appropriate target for improving operator skill. Further efforts should therefore focus on other aspects of the robot-user interface, such as feedback or the learning environment.
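Of the three maps compared above, the first (a PCA-based static map from 19-D finger input to 5 robot joints) can be sketched in a few lines. The data below is synthetic and the function names are hypothetical; the egalitarian variant and the autoencoder are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for recorded glove data: 500 samples of a 19-dimensional
# finger reading driven by 5 underlying movement synergies plus small noise.
latent = rng.normal(size=(500, 5))
mixing = rng.normal(size=(5, 19))
fingers = latent @ mixing + 0.05 * rng.normal(size=(500, 19))

mean = fingers.mean(axis=0)
# PCA via SVD of the centered data; rows of Vt are the principal directions.
U, S, Vt = np.linalg.svd(fingers - mean, full_matrices=False)
W = Vt[:5]                                    # static map: 19-D input -> 5 joints

def finger_to_joints(reading):
    """Apply the static linear map from a 19-D glove reading to 5 joint commands."""
    return W @ (reading - mean)

joints = finger_to_joints(fingers[0])
explained = (S[:5] ** 2) / (S ** 2).sum()     # variance captured per component
```

With plain PCA, `explained` is typically dominated by the first component; the study's egalitarian variant instead aims to spread that captured variance evenly across the five latent dimensions.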
  5. This paper presents a formulation for swarm control and high-level task planning that is dynamically responsive to user commands and adaptable to environmental changes. We design an end-to-end pipeline from a tactile tablet interface for user commands to onboard control of robotic agents based on decentralized ergodic coverage. Our approach demonstrates reliable and dynamic control of a swarm collective through the use of ergodic specifications for planning and executing agent trajectories as well as responding to user and external inputs. We validate our approach in a virtual reality simulation environment and in real-world experiments at the DARPA OFFSET Urban Swarm Challenge FX3 field tests with a robotic swarm, where user-based control of the swarm and mission-based tasks require a dynamic and flexible response to changing conditions and objectives in real time.
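Ergodic coverage steers agents so that the time-averaged statistics of their trajectories match a target spatial distribution, typically by comparing the two in a finite Fourier basis. A simplified sketch of such an ergodic metric on the unit square (unnormalized cosine basis, all names hypothetical, not the paper's formulation):

```python
import numpy as np

def fourier_coeffs(points, K=5):
    """Time-averaged cosine-basis coefficients of 2-D points in [0, 1]^2."""
    ks = np.arange(K)
    cx = np.cos(np.pi * ks[None, :] * points[:, 0:1])    # shape (N, K)
    cy = np.cos(np.pi * ks[None, :] * points[:, 1:2])
    return np.einsum('ni,nj->ij', cx, cy) / len(points)  # c_{k1,k2}

def ergodic_metric(traj, target_samples, K=5):
    """Weighted squared distance between trajectory and target spectra;
    Sobolev-style weights downweight high spatial frequencies."""
    k1, k2 = np.meshgrid(np.arange(K), np.arange(K), indexing='ij')
    lam = (1.0 + k1 ** 2 + k2 ** 2) ** -1.5
    diff = fourier_coeffs(traj, K) - fourier_coeffs(target_samples, K)
    return float((lam * diff ** 2).sum())

rng = np.random.default_rng(1)
target = rng.uniform(size=(4000, 2))       # uniform coverage target
spread = rng.uniform(size=(2000, 2))       # trajectory that covers the space
stuck = np.full((2000, 2), 0.05)           # trajectory stuck in one corner
```

A trajectory that spreads over the target region scores a much lower metric than one stuck in a corner, which is what lets a decentralized controller drive each agent by descending this metric as user commands reshape the target distribution.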