

Title: Inference of User Qualities in Shared Control
Users play an integral role in the performance of many robotic systems, and robotic systems must account for differences in users to improve collaborative performance. Much of the work in adapting to users has focused on designing teleoperation controllers that adjust to extrinsic user indicators such as force or intent, but these controllers do not adjust to intrinsic user qualities. In contrast, the Human-Robot Interaction (HRI) community has extensively studied intrinsic user qualities, but its results are not rapidly fed back into autonomy design. Here we provide foundational evidence for a new strategy that augments current shared control, and we provide a mechanism to feed results from the HRI community directly back into autonomy design. Our evidence is based on a study examining the impact of the user quality "locus of control" on telepresence robot performance. Our results support our hypothesis that key user qualities can be inferred from human-robot interactions (for example, through path deviation or time to completion) and that switching or adaptive autonomies might improve shared control performance.
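The abstract does not spell out the inference algorithm, but the loop it motivates can be pictured: map interaction metrics such as path deviation and time to completion to a coarse user-quality estimate, then switch the autonomy accordingly. Below is a minimal sketch in Python; the thresholds, labels, and function names are hypothetical illustrations, not values from the study.

```python
# Minimal sketch: infer a coarse user-quality label from interaction
# metrics, then switch the autonomy mode. All thresholds and labels
# are hypothetical, chosen only to illustrate the loop.

def infer_user_quality(path_deviation_m: float, completion_time_s: float) -> str:
    """Map observed interaction metrics to a coarse user-quality label."""
    if path_deviation_m < 0.2 and completion_time_s < 60.0:
        return "internal"  # e.g., an internal locus of control
    return "external"

def select_autonomy(user_quality: str) -> str:
    """Switch the shared-control policy based on the inferred quality."""
    return {"internal": "low_assistance", "external": "high_assistance"}[user_quality]

quality = infer_user_quality(path_deviation_m=0.35, completion_time_s=80.0)
print(quality, "->", select_autonomy(quality))
```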
Award ID(s):
1638099
NSF-PAR ID:
10092680
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE International Conference on Robotics and Automation (ICRA)
Page Range / eLocation ID:
588 to 595
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Most cyber-physical human systems (CPHS) rely on users learning how to interact with the system. Instead, a collaborative CPHS should learn from the user and adapt to them in a way that improves holistic system performance. Accomplishing this requires collaboration between the human-robot/human-computer interaction and cyber-physical system communities in order to feed knowledge about users back into the design of the CPHS. The requisite user studies, however, are difficult, time-consuming, and must be carefully designed. Furthermore, because humans are complex in their interactions with autonomy, it is difficult to know a priori how many users must participate to attain conclusive results. In this paper we elaborate on our work to infer intrinsic user qualities through human-robot interactions correlated with robot performance, in order to adapt the autonomy and improve holistic CPHS performance. We first demonstrate through a study that this idea is feasible. Next, we demonstrate that significant differences between groups of users can impact conclusions, particularly where different autonomies are involved. Finally, we provide our rich, extensive corpus of user study data to the wider community to aid researchers in designing better CPHS.
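The abstract above notes that significant differences between groups of users can change conclusions. A between-group check on a performance metric might look like the sketch below; the data are synthetic, and the test choice (Mann-Whitney U, which does not assume normality) is an assumption, not necessarily the paper's analysis.

```python
# Minimal sketch: test whether two user groups differ on task completion
# time. The data here are synthetic and purely illustrative.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
group_a = rng.normal(loc=55.0, scale=8.0, size=20)  # e.g., users under autonomy A
group_b = rng.normal(loc=65.0, scale=8.0, size=20)  # e.g., users under autonomy B

stat, p = mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.4f}")  # a small p suggests the groups differ
```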
2. This paper presents the design of a wearable robotic forearm for close-range human-robot collaboration. The robot's function is to serve as a lightweight supernumerary third arm for shared workspace activities. We present a functional prototype resulting from an iterative design process that included several user studies. An analysis of the robot's kinematics shows a 246% increase in reachable workspace compared to the natural human reach. The robot's degrees of freedom and range of motion support a variety of usage scenarios with the robot as a collaborative tool, including self-handovers, fetching objects while the human's hands are occupied, assisting human-human collaboration, and stabilizing an object. We analyze the biomechanical loads for these scenarios and find that the design is able to operate within human ergonomic wear limits. We then report on a pilot human-robot interaction study indicating that robot autonomy is more task-time efficient and preferred by users compared to direct voice control. These results suggest that the design presented here is a promising configuration for a lightweight wearable robotic augmentation device and can serve as a basis for further research into human-wearable collaboration.
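The 246% workspace figure comes from the authors' kinematic analysis, which is not reproduced here. As a rough picture of how such a number can be estimated, the sketch below Monte Carlo samples joint configurations of a generic planar two-link arm and reports the convex-hull area of the reached points; the link lengths and joint limits are made up, and a convex hull overestimates non-convex workspaces.

```python
# Minimal sketch: Monte Carlo estimate of reachable workspace area for a
# generic planar 2-link arm. Link lengths and joint limits are hypothetical.
import numpy as np
from scipy.spatial import ConvexHull

L1, L2 = 0.30, 0.25  # link lengths (m), illustrative
rng = np.random.default_rng(0)
q1 = rng.uniform(-np.pi / 2, np.pi / 2, size=20_000)  # joint 1 limits (rad)
q2 = rng.uniform(-2.0, 2.0, size=20_000)              # joint 2 limits (rad)

# Forward kinematics of the sampled configurations.
x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)

hull = ConvexHull(np.column_stack([x, y]))
print(f"approx. workspace area: {hull.volume:.4f} m^2")  # in 2D, volume = area
```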
3. In remote applications that mandate human supervision, shared control can prove vital by establishing a harmonious balance between the high-level cognition of a user and the low-level autonomy of a robot. In practice, though, achieving this balance is a challenging endeavor that largely depends on whether the operator effectively interprets the underlying shared control. Inspired by recent works on using immersive technologies to expose the internal shared control, we develop a virtual reality system to visually guide human-in-the-loop manipulation. Our implementation of shared control teleoperation employs end-effector manipulability polytopes, which are geometrical constructs that embed joint-limit and environmental constraints. These constructs capture a holistic view of the constrained manipulator's motion and can thus be visually represented as feedback for users on their operable space of movement. To assess the efficacy of our proposed approach, we consider a teleoperation task where users manipulate a screwdriver attached to a robotic arm's end effector. A pilot study with prospective operators is first conducted to discern which graphical cues and virtual reality setup are most preferable. Feedback from this study informs the final design of our virtual reality system, which is subsequently evaluated in the actual screwdriver teleoperation experiment. Our experimental findings support the utility of using polytopes for shared control teleoperation, but hint at the need for longer-term studies to garner their full benefits as virtual guides.
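A manipulability polytope of the kind described above is, in its simplest velocity form, the image of the joint-velocity limit box under the manipulator Jacobian: P = { J q̇ : |q̇_i| ≤ q̇_max,i }. Because that map is linear, the polytope is the convex hull of the mapped box vertices. The sketch below computes one for a hypothetical planar two-link arm; the paper's polytopes additionally embed environmental constraints, which are omitted here.

```python
# Minimal sketch: end-effector velocity polytope as the image of the
# joint-velocity box under the Jacobian. Arm model and limits are
# hypothetical; environmental constraints from the paper are omitted.
import itertools
import numpy as np
from scipy.spatial import ConvexHull

def jacobian_2link(q1, q2, L1=0.30, L2=0.25):
    """Analytic Jacobian of a planar 2-link arm (illustrative model)."""
    return np.array([
        [-L1 * np.sin(q1) - L2 * np.sin(q1 + q2), -L2 * np.sin(q1 + q2)],
        [ L1 * np.cos(q1) + L2 * np.cos(q1 + q2),  L2 * np.cos(q1 + q2)],
    ])

J = jacobian_2link(q1=0.4, q2=0.8)
qd_max = np.array([1.5, 2.0])  # joint speed limits (rad/s), illustrative

# Map every vertex of the joint-velocity box to an end-effector velocity.
box_vertices = np.array(list(itertools.product(*[(-m, m) for m in qd_max])))
ee_vertices = box_vertices @ J.T

hull = ConvexHull(ee_vertices)
print("polytope vertices (end-effector velocity, m/s):")
print(ee_vertices[hull.vertices])
```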
4. Shared autonomy provides an effective framework for human-robot collaboration that takes advantage of the complementary strengths of humans and robots to achieve common goals. Many existing approaches to shared autonomy make restrictive assumptions that the goal space, environment dynamics, or human policy are known a priori, or are limited to discrete action spaces, preventing those methods from scaling to complicated real-world environments. We propose a model-free, residual policy learning algorithm for shared autonomy that alleviates the need for these assumptions. Our agents are trained to minimally adjust the human's actions such that a set of goal-agnostic constraints is satisfied. We test our method in two continuous control environments: Lunar Lander, a 2D flight control domain, and a 6-DOF quadrotor reaching task. In experiments with human and surrogate pilots, our method significantly improves task performance without any knowledge of the human's goal beyond the constraints. These results highlight the ability of model-free deep reinforcement learning to realize assistive agents suited to continuous control settings with little knowledge of user intent.
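The execution-time structure of such a residual agent is simple even though the training is not: the learned policy outputs a correction that is added to the human's action, and training penalizes large corrections while rewarding constraint satisfaction. The sketch below shows only that composition step, with a stand-in residual function in place of the paper's trained network.

```python
# Minimal sketch of residual action composition at execution time.
# residual_policy is a stand-in; the paper learns it with model-free RL,
# penalizing large residuals so the human's action is minimally adjusted.
import numpy as np

def residual_policy(obs: np.ndarray, human_action: np.ndarray) -> np.ndarray:
    """Stand-in for a learned residual network (illustrative only)."""
    return -0.1 * human_action  # placeholder correction

def shared_action(obs, human_action, act_low=-1.0, act_high=1.0):
    """Execute the human's action plus the agent's correction."""
    corrected = human_action + residual_policy(obs, human_action)
    return np.clip(corrected, act_low, act_high)  # respect actuator bounds

obs = np.zeros(8)                     # observation, placeholder shape
human_action = np.array([0.8, -0.3])  # continuous control command
print(shared_action(obs, human_action))
```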
5. Human-centered environments provide affordances for and require the use of two-handed, or bimanual, manipulations. Robots designed to function in, and physically interact with, these environments have not been able to meet these requirements because standard bimanual control approaches have not accommodated the diverse, dynamic, and intricate coordinations between two arms to complete bimanual tasks. In this work, we enabled robots to more effectively perform bimanual tasks by introducing a bimanual shared-control method. The control method moves the robot's arms to mimic the operator's arm movements but provides on-the-fly assistance to help the user complete tasks more easily. Our method used a bimanual action vocabulary, constructed by analyzing how people perform two-hand manipulations, as the core abstraction level for reasoning about how to assist in bimanual shared autonomy. The method inferred which individual action from the bimanual action vocabulary was occurring using a sequence-to-sequence recurrent neural network architecture and activated a corresponding assistance mode: signals introduced into the shared-control loop designed to make the performance of a particular bimanual action easier or more efficient. We demonstrate the effectiveness of our method through two user studies showing that novice users could control a robot to complete a range of complex manipulation tasks more successfully using our method compared to alternative approaches. We discuss the implications of our findings for real-world robot control scenarios.
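The inference step described above, recognizing which bimanual action is underway from a window of two-arm motion, can be pictured with a small recurrent classifier. The sketch below is not the paper's sequence-to-sequence architecture; the action vocabulary, feature layout, and all sizes are hypothetical.

```python
# Minimal sketch: classify a window of two-arm motion features into one of
# a few bimanual actions with a recurrent network. Vocabulary, features,
# and sizes are hypothetical stand-ins for the paper's seq2seq model.
import torch
import torch.nn as nn

ACTIONS = ["self_handover", "fixed_offset", "one_hand_fixed", "independent"]

class BimanualActionClassifier(nn.Module):
    def __init__(self, feat_dim=14, hidden=64, n_actions=len(ACTIONS)):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):        # x: (batch, time, feat_dim)
        _, h = self.rnn(x)       # h: (num_layers, batch, hidden)
        return self.head(h[-1])  # logits over the action vocabulary

model = BimanualActionClassifier()
window = torch.randn(1, 50, 14)  # 50 timesteps of, e.g., wrist poses
probs = model(window).softmax(dim=-1)
print(ACTIONS[int(probs.argmax())], probs.detach().numpy().round(3))
```

Once an action is recognized, the corresponding assistance mode would be switched on in the shared-control loop, analogous to select_autonomy in the first sketch.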