Users play an integral role in the performance of many robotic systems, and robotic systems must account for differences in users to improve collaborative performance. Much of the work in adapting to users has focused on designing teleoperation controllers that adjust to extrinsic user indicators such as force or intent, but do not adjust to intrinsic user qualities. In contrast, the Human-Robot Interaction (HRI) community has extensively studied intrinsic user qualities, but its results may not rapidly be fed back into autonomy design. Here we provide foundational evidence for a new strategy that augments current shared control, and we provide a mechanism to directly feed results from the HRI community back into autonomy design. Our evidence is based on a study examining the impact of the user quality “locus of control” on telepresence robot performance. Our results support our hypothesis that key user qualities can be inferred from human-robot interactions (for example, through path deviation or time to completion) and that switching or adaptive autonomies might improve shared control performance.
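As a rough illustration of the kind of interaction metrics this abstract refers to, the sketch below computes path deviation and time to completion from a logged trajectory. The function names, data layout, and synthetic data are assumptions for illustration only, not the study's implementation.

```python
# Hypothetical sketch: computing interaction metrics that might hint at
# intrinsic user qualities (e.g., locus of control). Names and data are
# illustrative only, not taken from the study.
import numpy as np

def path_deviation(driven_path: np.ndarray, reference_path: np.ndarray) -> float:
    """Mean distance from each driven waypoint to its nearest reference waypoint."""
    dists = np.linalg.norm(driven_path[:, None, :] - reference_path[None, :, :], axis=-1)
    return float(dists.min(axis=1).mean())

def time_to_completion(timestamps: np.ndarray) -> float:
    """Elapsed time between the first and last logged sample."""
    return float(timestamps[-1] - timestamps[0])

# Example usage with synthetic data
ref = np.linspace([0.0, 0.0], [5.0, 0.0], 50)          # straight reference path
driven = ref + np.random.normal(0.0, 0.1, ref.shape)   # noisy user-driven path
t = np.linspace(0.0, 42.0, len(driven))                # timestamps in seconds

features = {"path_deviation": path_deviation(driven, ref),
            "time_to_completion": time_to_completion(t)}
print(features)  # such features could feed a switching or adaptive autonomy
```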
Perception-Motion Coupling in Active Telepresence: Human Behavior and Teleoperation Interface Design
Teleoperation enables complex robot platforms to perform tasks beyond the scope of current state-of-the-art robot autonomy by imparting human intelligence and critical thinking to these operations. For seamless control of robot platforms, it is essential to give the operator optimal situational awareness of the workspace through active telepresence cameras. However, controlling these active telepresence cameras adds a further degree of complexity to the task of teleoperation. In this paper we present results from a user study that investigates: (1) how the teleoperator learns or adapts to performing tasks via active cameras modeled after the camera placements on the TRINA humanoid robot; (2) the perception-action coupling that operators implement to control active telepresence cameras; and (3) the camera preferences for performing the tasks. These findings from the human motion analysis and post-study survey will help us determine desirable design features for robot teleoperation interfaces and assistive autonomy.
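One simple way to picture the perception-action coupling studied here is a proportional mapping from the operator's head motion to camera pan/tilt commands. The sketch below is a minimal illustration under that assumption; the interface, gains, and limits are hypothetical and not the paper's implementation.

```python
# Hypothetical sketch of perception-action coupling for an active telepresence
# camera: operator head orientation is mapped to pan/tilt velocity commands.
# Gains, limits, and the head-tracking interface are illustrative assumptions.
import numpy as np

PAN_GAIN = 1.5    # rad/s of camera pan per rad of head yaw offset
TILT_GAIN = 1.2   # rad/s of camera tilt per rad of head pitch offset
MAX_RATE = 0.8    # rad/s saturation to keep camera motion comfortable

def camera_rate_command(head_yaw: float, head_pitch: float,
                        yaw_ref: float = 0.0, pitch_ref: float = 0.0):
    """Proportional coupling of head pose (rad) to camera pan/tilt rates (rad/s)."""
    pan_rate = np.clip(PAN_GAIN * (head_yaw - yaw_ref), -MAX_RATE, MAX_RATE)
    tilt_rate = np.clip(TILT_GAIN * (head_pitch - pitch_ref), -MAX_RATE, MAX_RATE)
    return pan_rate, tilt_rate

# Example: operator looks 0.2 rad to the left and slightly down
print(camera_rate_command(head_yaw=0.2, head_pitch=-0.05))
```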
- PAR ID:
- 10448494
- Date Published:
- Journal Name:
- ACM Transactions on Human-Robot Interaction
- Volume:
- 12
- Issue:
- 3
- ISSN:
- 2573-9522
- Page Range / eLocation ID:
- 1 to 24
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Teleoperation enables controlling complex robot systems remotely, providing the ability to impart human expertise from a distance. However, these interfaces can be complicated to use, as it is difficult to contextualize information about robot motion in the workspace from limited camera feedback. It is therefore necessary to study how best to assist the operator in ways that reduce interface complexity and the effort required for teleoperation. Techniques that assist the operator during freeform teleoperation include: (1) perception augmentation, such as augmented reality visual cues and additional camera angles, which increases the information available to the operator; and (2) action augmentation, such as assistive autonomy and control augmentation, which reduces the effort required by the operator while teleoperating. In this article, we investigate: (1) which aspects of dexterous telemanipulation require assistance; (2) the impact of perception and action augmentation in improving teleoperation performance; and (3) what factors impact the usage of assistance and how to tailor these interfaces based on the operators’ needs and characteristics. The findings from this user study and the resulting post-study surveys will help identify task-based and user-preferred perception and action augmentation features for teleoperation assistance.
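A common generic form of action augmentation is to blend the operator's command with an autonomous suggestion. The sketch below shows a plain linear blend under that assumption; the blending weight and command representation are illustrative, not the article's specific method.

```python
# Hypothetical sketch of action augmentation via command blending: the
# executed command is a weighted mix of the operator's input and an assistive
# autonomy's suggestion. The weight alpha is an illustrative choice.
import numpy as np

def blend_commands(u_human: np.ndarray, u_auto: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Linear arbitration between human and autonomous Cartesian velocity commands."""
    alpha = float(np.clip(alpha, 0.0, 1.0))
    return alpha * u_auto + (1.0 - alpha) * u_human

# Example: operator commands motion in +x, autonomy nudges toward a goal in +y
u_h = np.array([0.10, 0.00, 0.00])   # m/s
u_a = np.array([0.04, 0.06, 0.00])   # m/s
print(blend_commands(u_h, u_a, alpha=0.3))
```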
-
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible, and we hope that it can aid robots that learn to act autonomously in the real world.
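To make the temporal-smoothness requirement concrete, the sketch below applies a simple exponential moving average to a stream of retargeted joint targets. The filter, its constant, and the 7-DoF assumption are illustrative; they are not the system's actual smoothing method.

```python
# Hypothetical sketch of one ingredient a retargeting pipeline needs:
# temporal smoothing of per-frame joint targets so the robot trajectory
# is smooth rather than jittery. The filter and constant are illustrative.
import numpy as np

class JointSmoother:
    """Exponential moving average over retargeted joint-angle frames."""
    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha    # lower alpha -> smoother but laggier output
        self.state = None     # last smoothed joint vector

    def update(self, q_target: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = q_target.copy()
        else:
            self.state = self.alpha * q_target + (1.0 - self.alpha) * self.state
        return self.state

# Example: smooth a noisy stream of 7-DoF arm targets from a vision model
smoother = JointSmoother(alpha=0.25)
for _ in range(5):
    noisy_frame = np.random.normal(0.0, 0.05, size=7)
    smooth_frame = smoother.update(noisy_frame)
print(smooth_frame)
```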
-
Supervisory control of a humanoid robot in a manipulation task requires coordination of remote perception with robot action, which becomes more demanding when multiple moving cameras are available for task supervision. We explore the use of autonomous camera control and selection to reduce operator workload and improve task performance in a supervisory control task. We design a novel approach to autonomous camera selection and control and evaluate it in a user study, which revealed that autonomous camera control does improve task performance and operator experience, but that autonomous camera selection requires further investigation to benefit the operator’s confidence and maintain trust in the robot autonomy.
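One way autonomous camera selection is often framed is as a scoring problem over candidate viewpoints. The sketch below picks the camera whose optical axis points most directly at the end-effector; the camera poses and scoring rule are assumptions for illustration, not the approach evaluated in this work.

```python
# Hypothetical sketch of an autonomous camera-selection heuristic: pick the
# camera whose optical axis points most directly at the robot's end-effector.
# Camera poses and the scoring rule are illustrative assumptions.
import numpy as np

def view_score(cam_pos: np.ndarray, cam_axis: np.ndarray, target: np.ndarray) -> float:
    """Cosine of the angle between the camera's optical axis and the target direction."""
    to_target = target - cam_pos
    to_target /= np.linalg.norm(to_target)
    return float(np.dot(cam_axis / np.linalg.norm(cam_axis), to_target))

def select_camera(cameras: dict, target: np.ndarray) -> str:
    """Return the name of the camera with the best view of the target point."""
    return max(cameras, key=lambda name: view_score(*cameras[name], target))

# Example: two fixed cameras observing an end-effector at (0.5, 0.2, 0.4)
cams = {
    "head_cam":  (np.array([0.0, 0.0, 1.2]), np.array([0.5, 0.2, -0.8])),
    "wrist_cam": (np.array([0.3, -0.4, 0.6]), np.array([0.2, 0.6, -0.2])),
}
print(select_camera(cams, target=np.array([0.5, 0.2, 0.4])))
```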
-
On-orbit servicing of satellites is complicated by the fact that almost all existing satellites were not designed to be serviced. This creates a number of challenges, one of which is to cut and partially remove the protective thermal blanketing that encases a satellite prior to performing the servicing operation. A human operator on Earth can perform this task telerobotically, but must overcome difficulties presented by the multi-second round-trip telemetry delay between the satellite and the operator and the limited, or even obstructed, views from the available cameras. This paper reports the results of ground-based experiments with trained NASA robot teleoperators to compare our recently reported augmented virtuality visualization to the conventional camera-based visualization. We also compare the master console of a da Vinci surgical robot to the conventional teleoperation interface. The results show that, for the cutting task, the augmented virtuality visualization can improve operator performance compared to the conventional visualization, but that operators are more proficient with the conventional control interface than with the da Vinci master console.