
Title: Examining the Effects of Anticipatory Robot Assistance on Human Decision Making
Collaborative robots that provide anticipatory assistance can help people complete tasks more quickly. Because anticipatory assistance is provided before help is explicitly requested, there is a chance that this action itself will influence the person's future decisions in the task. In this work, we investigate whether a robot's anticipatory assistance can drive people to make choices different from those they would otherwise make. Such a study requires measuring intent, yet the act of measurement could itself modify intent, resulting in an observer paradox. We carefully designed an experiment to avoid this effect, considering several mitigations such as carefully choosing which human behavioral signals to use to measure intent and designing unobtrusive ways to obtain these signals. We conducted a user study (N=99) in which participants completed a collaborative object retrieval task: users selected an object and a robot arm retrieved it for them. The robot predicted the user's object selection from eye gaze in advance of the explicit selection, and then provided either collaborative anticipation (moving toward the predicted object), adversarial anticipation (moving away from the predicted object), or no anticipation (no movement, the control condition). We found trends and participant comments suggesting that people's decision making changes in the presence of robot anticipatory motion and that this change differs depending on the robot's anticipation strategy.
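As a rough illustration of the anticipation loop described in the abstract, the sketch below commits to a predicted object once gaze has dwelled on it long enough, then picks an arm target for each of the three conditions. This is a minimal sketch only: the 0.5 s dwell threshold, the function names, and the adversarial target choice are illustrative assumptions, not the study's actual gaze predictor or motion strategy.

```python
from collections import defaultdict

# Hypothetical dwell time needed before committing to a prediction (assumption).
DWELL_THRESHOLD_S = 0.5

def predict_selection(fixations):
    """fixations: iterable of (object_id, duration_s) in temporal order.
    Returns the first object whose accumulated dwell time exceeds the
    threshold, or None if no prediction can be made yet."""
    dwell = defaultdict(float)
    for obj, duration in fixations:
        dwell[obj] += duration
        if dwell[obj] >= DWELL_THRESHOLD_S:
            return obj
    return None

def anticipatory_target(predicted, objects, condition):
    """Choose where the arm should start moving before the explicit selection."""
    if predicted is None or condition == "none":
        return None                                   # control: no anticipatory motion
    if condition == "collaborative":
        return predicted                              # move toward the predicted object
    if condition == "adversarial":
        others = [o for o in objects if o != predicted]
        return others[0] if others else None          # move away from the predicted object
    raise ValueError(f"unknown condition: {condition}")

# Example: gaze lingers on 'mug', so the robot pre-moves toward it.
fixations = [("bottle", 0.2), ("mug", 0.3), ("mug", 0.3)]
predicted = predict_selection(fixations)
print(anticipatory_target(predicted, ["bottle", "mug", "box"], "collaborative"))  # -> mug
```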
Authors:
Editors:
Wagner, A.R.
Award ID(s):
1755823
Publication Date:
NSF-PAR ID:
10273002
Journal Name:
International Conference on Social Robotics (ICSR)
Volume:
LNCS 12483
Page Range or eLocation-ID:
590-603
Sponsoring Org:
National Science Foundation
More Like this
  1. Human-centered environments provide affordances for and require the use of two-handed, or bimanual, manipulations. Robots designed to function in, and physically interact with, these environments have not been able to meet these requirements because standard bimanual control approaches have not accommodated the diverse, dynamic, and intricate coordinations between two arms needed to complete bimanual tasks. In this work, we enabled robots to more effectively perform bimanual tasks by introducing a bimanual shared-control method. The control method moves the robot's arms to mimic the operator's arm movements but provides on-the-fly assistance to help the user complete tasks more easily. Our method used a bimanual action vocabulary, constructed by analyzing how people perform two-hand manipulations, as the core abstraction level for reasoning about how to assist in bimanual shared autonomy. The method inferred which individual action from the bimanual action vocabulary was occurring using a sequence-to-sequence recurrent neural network architecture and turned on a corresponding assistance mode: signals introduced into the shared-control loop designed to make the performance of a particular bimanual action easier or more efficient. We demonstrate the effectiveness of our method through two user studies showing that novice users could control a robot to complete a range of complex manipulation tasks more successfully using our method compared to alternative approaches. We discuss the implications of our findings for real-world robot control scenarios. (See the illustrative sketch after this list.)
  2. Several recent studies have demonstrated the promise of deep visuomotor policies for robot manipulator control. Despite impressive progress, these systems are known to be vulnerable to physical disturbances, such as accidental or adversarial bumps that make them drop the manipulated object. They also tend to be distracted by visual disturbances, such as objects moving in the robot's field of view, even if the disturbance does not physically prevent the execution of the task. In this paper, we propose an approach for augmenting a deep visuomotor policy trained through demonstrations with Task Focused visual Attention (TFA). The manipulation task is specified with natural language text such as "move the red bowl to the left". This allows the visual attention component to concentrate on the current object that the robot needs to manipulate. We show that even in benign environments, the TFA allows the policy to consistently outperform a variant with no attention mechanism. More importantly, the new policy is significantly more robust: it regularly recovers from severe physical disturbances (such as bumps causing it to drop the object) from which the baseline policy, i.e., the one with no visual attention, almost never recovers. In addition, we show that the proposed policy performs correctly in the presence of a wide class of visual disturbances, exhibiting a behavior reminiscent of human selective visual attention experiments. (See the illustrative sketch after this list.)
  3. Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and type of initiation used by a robot. We investigate the effect that these forms of communication have on trust with and without robot mistakes during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when that robot offered advice only upon request as opposed to when the robot took initiative to give advice.
  4. Intelligent mobile robots have recently become able to operate autonomously in large-scale indoor environments for extended periods of time. In this process, mobile robots need the capabilities of both task and motion planning. Task planning in such environments involves sequencing the robot's high-level goals and subgoals, and typically requires reasoning about the locations of people, rooms, and objects in the environment, and their interactions, to achieve a goal. One of the prerequisites for optimal task planning that is often overlooked is having an accurate estimate of the actual distance (or time) a robot needs to navigate from one location to another. State-of-the-art motion planning algorithms, though often computationally complex, are designed exactly for this purpose of finding routes through constrained spaces. In this article, we focus on integrating task and motion planning (TMP) to achieve task-level-optimal planning for robot navigation while maintaining manageable computational efficiency. To this end, we introduce the TMP algorithm PETLON (Planning Efficiently for Task-Level-Optimal Navigation), including two configurations with different trade-offs in computational expense between task and motion planning, for everyday service tasks using a mobile robot. Experiments have been conducted both in simulation and on a mobile robot using object delivery tasks in an indoor office environment. The key observation from the results is that PETLON is more efficient than a baseline approach that pre-computes motion costs of all possible navigation actions, while still producing plans that are optimal at the task level. We provide results with two different task planning paradigms in the implementation of PETLON, and offer TMP practitioners guidelines for the selection of task planners from an engineering perspective. (See the illustrative sketch after this list.)
  5. Performing robust goal-directed manipulation tasks remains a crucial challenge for autonomous robots. In an ideal case, shared autonomous control of manipulators would allow human users to specify their intent as a goal state and have the robot reason over the actions and motions needed to achieve it. However, realizing this remains elusive due to the problem of perceiving the robot's environment. We address and describe the problem of axiomatic scene estimation (AxScEs) for robot manipulation in cluttered scenes, which is the estimation of a tree-structured scene graph describing the configuration of objects observed from robot sensing. We propose generative approaches to scene inference of the robot's environment as a scene graph: the axiomatic particle filter, and axiomatic scene estimation by a Markov chain Monte Carlo based sampler. The results from AxScEs estimation are axioms amenable to goal-directed manipulation through symbolic inference for task planning and collision-free motion planning and execution. We demonstrate the results for goal-directed manipulation of multi-object scenes by a PR2 robot. (See the illustrative sketch after this list.)
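For item 1, the sketch below stands in for the action-inference step: a small recurrent classifier (assuming PyTorch is available) maps a window of two-arm operator motion features to one entry of a placeholder bimanual action vocabulary and looks up an assistance mode. The paper describes a sequence-to-sequence RNN; this simplified single-label classifier, the vocabulary, the feature dimension, and the mode names are assumptions for illustration only.

```python
import torch
import torch.nn as nn

# Placeholder bimanual action vocabulary and assistance modes (assumptions).
BIMANUAL_VOCAB = ["self_handover", "hold_and_work", "symmetric_lift", "independent"]
ASSISTANCE_MODE = {action: f"assist_{action}" for action in BIMANUAL_VOCAB}

class BimanualActionClassifier(nn.Module):
    """GRU over a window of two-arm motion features, classifying the current action."""
    def __init__(self, feat_dim=14, hidden=64, n_actions=len(BIMANUAL_VOCAB)):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_actions)

    def forward(self, x):              # x: (batch, time, feat_dim) two-arm features
        _, h = self.rnn(x)             # final hidden state summarizes the window
        return self.head(h[-1])        # logits over the action vocabulary

model = BimanualActionClassifier()
window = torch.randn(1, 50, 14)        # e.g. 50 frames of left- and right-wrist poses
action = BIMANUAL_VOCAB[model(window).argmax(dim=-1).item()]
print("inferred action:", action, "-> enable", ASSISTANCE_MODE[action])
```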
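For item 2, this sketch shows one common way to realize language-conditioned spatial attention over convolutional features, in the spirit of the described Task Focused visual Attention. The layer sizes, the dot-product scoring, and the module name are assumptions rather than the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TaskFocusedAttentionSketch(nn.Module):
    """Weights visual features by their similarity to the instruction embedding."""
    def __init__(self, feat_channels=32, text_dim=16):
        super().__init__()
        self.query = nn.Linear(text_dim, feat_channels)   # project instruction embedding

    def forward(self, feat_map, text_emb):
        # feat_map: (B, C, H, W) visual features; text_emb: (B, text_dim)
        q = self.query(text_emb).unsqueeze(-1).unsqueeze(-1)      # (B, C, 1, 1)
        scores = (feat_map * q).sum(dim=1, keepdim=True)          # (B, 1, H, W)
        attn = torch.softmax(scores.flatten(2), dim=-1).view_as(scores)
        return feat_map * attn          # attended features emphasize the commanded object

features = torch.randn(1, 32, 8, 8)     # stand-in CNN features
instruction = torch.randn(1, 16)        # stand-in embedding of "move the red bowl to the left"
policy_input = TaskFocusedAttentionSketch()(features, instruction)
print(policy_input.shape)               # torch.Size([1, 32, 8, 8])
```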
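For item 4, the sketch below illustrates the trade-off PETLON targets: querying a (stand-in) motion planner lazily and caching its answers, rather than pre-computing motion costs for every pair of locations. The location names, the Manhattan-distance "planner", and the tiny delivery planner are placeholders, not the PETLON algorithm itself.

```python
import functools
import itertools

# Stand-in map: named locations on a grid (assumption for illustration).
LOCATIONS = {"kitchen": (0, 0), "office": (4, 3), "lab": (9, 1)}

@functools.lru_cache(maxsize=None)
def motion_cost(a, b):
    """Stand-in for an expensive motion-planner call; cached so each
    navigation edge is planned at most once, and only if actually queried."""
    (x1, y1), (x2, y2) = LOCATIONS[a], LOCATIONS[b]
    return abs(x1 - x2) + abs(y1 - y2)     # Manhattan distance as a cost proxy

def best_delivery_order(start, goals):
    """Tiny task planner: order delivery goals to minimize navigation cost,
    querying motion_cost only for the sequences it actually evaluates."""
    def plan_cost(order):
        stops = [start, *order]
        return sum(motion_cost(a, b) for a, b in zip(stops, stops[1:]))
    return min(itertools.permutations(goals), key=plan_cost)

print(best_delivery_order("kitchen", ("office", "lab")))   # -> ('office', 'lab')
```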
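For item 5, this is a toy particle filter over scene graphs in the spirit of AxScEs: each particle is a candidate support structure for the objects, weighted by agreement with a noisy detection. The two-object scene, the 0.9 detector accuracy, and the single resampling step are simplifications for illustration, not the paper's estimator.

```python
import random
from collections import Counter

OBJECTS = ["mug", "book"]
SUPPORTS = ["table", "shelf", "book"]          # possible parents in the scene graph

def random_scene():
    """A particle: each object is assigned a supporting parent (not itself)."""
    return {o: random.choice([s for s in SUPPORTS if s != o]) for o in OBJECTS}

def likelihood(scene, observation):
    """Probability of the detected (object, support) pairs given the scene,
    using a hypothetical 0.9-accurate support detector."""
    p = 1.0
    for obj, support in observation.items():
        p *= 0.9 if scene[obj] == support else 0.1
    return p

def estimate_scene(observation, n_particles=200):
    particles = [random_scene() for _ in range(n_particles)]
    weights = [likelihood(s, observation) for s in particles]
    resampled = random.choices(particles, weights=weights, k=n_particles)
    # The modal resampled particle approximates the most probable scene graph.
    mode, _ = Counter(tuple(sorted(s.items())) for s in resampled).most_common(1)[0]
    return dict(mode)

observation = {"mug": "table", "book": "shelf"}
print(estimate_scene(observation))             # e.g. {'book': 'shelf', 'mug': 'table'}
```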