Title: Methods for Expressing Robot Intent for Human–Robot Collaboration in Shared Workspaces
Human–robot collaboration is becoming increasingly common in factories around the world; accordingly, we need to improve the interaction experiences between humans and robots working in these spaces. In this article, we report on a user study that investigated methods for signaling a robot's intent to move to a person sharing its workspace, in this case the surface of a tabletop. Our study tested the effectiveness of three motion-based and three light-based intent signals, as well as participants' overall level of comfort while working with the robot to sort colored blocks on the tabletop. Although the differences were not statistically significant, our findings suggest that the light signal located closest to the workspace, an LED bracelet near the robot's end effector, was the most noticeable and least confusing to participants. These findings can be leveraged to support human–robot collaborations in shared spaces.
Award ID(s):
1763469
PAR ID:
10345731
Author(s) / Creator(s):
Date Published:
Journal Name:
ACM Transactions on Human-Robot Interaction
Volume:
10
Issue:
4
ISSN:
2573-9522
Page Range / eLocation ID:
1 to 27
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Robots are increasingly being introduced into domains where they assist or collaborate with human counterparts. There is a growing body of literature on how robots might serve as collaborators in creative activities, but little is known about the factors that shape human perceptions of robots as creative collaborators. This paper investigates the effects of a robot's social behaviors on people's creative thinking and their perceptions of the robot. We developed an interactive system to facilitate collaboration between a human and a robot in a creative activity. We conducted a user study (n = 12) in which the robot and adult participants took turns creating compositions using tangram pieces projected onto a shared workspace. We observed four human behavioral traits related to creativity in the interaction: accepting robot inputs as inspiration, delegating the creative lead to the robot, communicating creative intents, and being playful in the creation. Our findings suggest designs for co-creation in social robots that account for the adverse effect of giving the robot too much control over the creation, as well as the role playfulness plays in the creative process.
  2. Wagner, A.R. (Ed.)
    Collaborative robots that provide anticipatory assistance are able to help people complete tasks more quickly. Because anticipatory assistance is provided before help is explicitly requested, there is a chance that this action itself will influence the person's future decisions in the task. In this work, we investigate whether a robot's anticipatory assistance can drive people to make choices different from those they would otherwise make. Such a study requires measuring intent, yet measuring intent can itself modify it, resulting in an observer paradox. To combat this, we designed the experiment to avoid this effect, considering mitigations such as which human behavioral signals to use to measure intent and how to obtain those signals unobtrusively. We conducted a user study (N = 99) in which participants completed a collaborative object-retrieval task: users selected an object and a robot arm retrieved it for them. The robot predicted the user's object selection from eye gaze in advance of their explicit selection, and then provided either collaborative anticipation (moving toward the predicted object), adversarial anticipation (moving away from the predicted object), or no anticipation (no movement; control condition). We found trends and participant comments suggesting that people's decision making changes in the presence of robot anticipatory motion, and that this change differs depending on the robot's anticipation strategy.
  3. The field of human-robot interaction has been rapidly expanding, but an ever-present obstacle facing this field is developing accessible, reliable, and effective forms of communication. It is often imperative to the efficacy of the robot and the overall human-robot interaction that a robot be capable of expressing information about itself to humans in the environment. Among the evolving approaches to this obstacle is the use of light as a communication modality. Light-based communication effectively captures attention, can be seen at a distance, and is commonly used in our daily lives. Our team explored the ways light-based signals on robots are being used to improve human understanding of a robot's operating state. In other words, we sought to determine how light-based signals are being used to help individuals identify the conditions (e.g., capabilities, goals, needs) that comprise and dictate a robot's current functionality. We identified four operating states ("Blocked", "Error", "Seeking Interaction", and "Not Seeking Interaction") in which light is used to increase individuals' understanding of the robot's operations. These operating states are expressed by manipulating three visual dimensions of a robot's onboard lighting features: color, pattern of lighting, and frequency of the pattern. In our work, we outline how these dimensions vary across operating states and the effect they have on human understanding, and we offer potential explanations for the importance of each dimension. We also discuss the main shortcomings of this technology: the first is the overlapping use of combinations of dimensions across operating states; the remainder relate to the difficulties of leveraging color to convey information. Finally, we provide considerations on how this technology might be improved in the future through the standardization of light-based signals and by increasing the amount of information provided within interactions between agents.
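    The abstract above characterizes light-based signals along three dimensions (color, pattern, and frequency) keyed to four operating states. As a minimal illustrative sketch only, and not code from any of the works listed here, such a mapping could be represented as a simple lookup table; the specific colors, patterns, and frequencies below are assumptions chosen for illustration.

```python
from dataclasses import dataclass
from enum import Enum


class OperatingState(Enum):
    BLOCKED = "Blocked"
    ERROR = "Error"
    SEEKING_INTERACTION = "Seeking Interaction"
    NOT_SEEKING_INTERACTION = "Not Seeking Interaction"


@dataclass
class LightSignal:
    color: str           # color name or value understood by the LED driver
    pattern: str          # e.g., "solid", "blink", "pulse"
    frequency_hz: float   # rate at which the pattern repeats (0 for solid)


# Hypothetical state-to-signal mapping; these values are illustrative
# assumptions, not values reported in the survey above.
SIGNAL_TABLE = {
    OperatingState.BLOCKED: LightSignal("yellow", "blink", 1.0),
    OperatingState.ERROR: LightSignal("red", "blink", 2.0),
    OperatingState.SEEKING_INTERACTION: LightSignal("green", "pulse", 0.5),
    OperatingState.NOT_SEEKING_INTERACTION: LightSignal("blue", "solid", 0.0),
}


def signal_for(state: OperatingState) -> LightSignal:
    """Look up the light signal a robot would display for a given operating state."""
    return SIGNAL_TABLE[state]


if __name__ == "__main__":
    s = signal_for(OperatingState.ERROR)
    print(f"Error state -> color={s.color}, pattern={s.pattern}, {s.frequency_hz} Hz")
```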
  4. We describe a physical interactive system for human-robot collaborative design (HRCD) consisting of a tangible user interface (TUI) and a robotic arm that simultaneously manipulates the TUI with the human designer. In an observational study of 12 participants exploring a complex design problem together with the robot, we find that human designers have to negotiate both the physical and the creative space with the machine. They also often ascribe social meaning to the robot's pragmatic behaviors. Based on these findings, we propose four considerations for future HRCD systems: managing the shared workspace, communicating preferences about design goals, respecting different design styles, and taking into account the social meaning of design acts. 
  5. Performing robust goal-directed manipulation tasks remains a crucial challenge for autonomous robots. In an ideal case, shared autonomous control of manipulators would allow human users to specify their intent as a goal state and have the robot reason over the actions and motions needed to achieve this goal. However, realizing this goal remains elusive due to the problem of perceiving the robot's environment. We address the problem of axiomatic scene estimation (AxScEs) for robot manipulation in cluttered scenes: the estimation of a tree-structured scene graph describing the configuration of objects observed from robot sensing. We propose generative approaches to inferring the robot's environment as a scene graph, namely an axiomatic particle filter and an axiomatic scene estimator based on Markov chain Monte Carlo sampling. The results of AxScEs estimation are axioms amenable to goal-directed manipulation through symbolic inference for task planning and collision-free motion planning and execution. We demonstrate these results for goal-directed manipulation of multi-object scenes by a PR2 robot.
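    The abstract above describes estimating a tree-structured scene graph of observed objects and converting it into symbolic axioms for task planning. As a minimal sketch of that idea only, assuming a simple support ("on") relation and not reproducing that work's actual representation or inference method, the structure might look like the following.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative tree-structured scene graph; the node fields and the
# "on(child, parent)" axiom form are assumptions made for this sketch,
# not the representation used in the work summarized above.


@dataclass
class SceneNode:
    name: str                                   # object identifier, e.g., "table"
    children: List["SceneNode"] = field(default_factory=list)

    def add(self, child: "SceneNode") -> "SceneNode":
        self.children.append(child)
        return child


def to_axioms(node: SceneNode, parent: Optional[SceneNode] = None) -> List[str]:
    """Flatten the scene tree into symbolic axioms usable by a task planner."""
    axioms = []
    if parent is not None:
        axioms.append(f"on({node.name}, {parent.name})")
    for child in node.children:
        axioms.extend(to_axioms(child, node))
    return axioms


if __name__ == "__main__":
    table = SceneNode("table")
    blue = table.add(SceneNode("blue_block"))
    blue.add(SceneNode("red_block"))            # red block stacked on the blue block
    table.add(SceneNode("green_block"))
    print(to_axioms(table))
    # ['on(blue_block, table)', 'on(red_block, blue_block)', 'on(green_block, table)']
```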