Title: Modeling Human Temporal Uncertainty in Human-Agent Teams
Automated scheduling is a potentially powerful tool for facilitating efficient, intuitive interactions between a robot and a human teammate. However, a current gap in automated scheduling is that it is not well understood how best to represent the timing uncertainty that human teammates introduce. This paper addresses this gap by designing an online human-robot collaborative packaging game that we use to build a model of human timing uncertainty from a population of crowdworkers. We conclude that heavy-tailed distributions are the best models of human temporal uncertainty, with a Log-Normal distribution achieving the best fit to our experimental data. We discuss how these results, along with our collaborative online game, will inform and facilitate future explorations into scheduling for improved human-robot fluency.
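As an illustration of the kind of model selection the abstract describes, the sketch below fits several candidate distributions to task-duration data and compares them by AIC. This is a minimal, hypothetical example on synthetic data; the paper's actual dataset, candidate set, and fitting procedure are not reproduced here.

```python
# Hypothetical sketch: fit candidate distributions to human task-completion
# times and compare fits, in the spirit of the paper's model selection.
# The durations below are synthetic stand-ins for observed times (seconds).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
durations = rng.lognormal(mean=1.5, sigma=0.4, size=500)

candidates = {
    "lognorm": stats.lognorm,  # heavy-tailed
    "gamma": stats.gamma,      # heavy-tailed
    "norm": stats.norm,        # symmetric baseline
}

for name, dist in candidates.items():
    params = dist.fit(durations)               # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(durations, *params))
    aic = 2 * len(params) - 2 * loglik         # lower AIC = better fit
    print(f"{name:8s}  log-likelihood={loglik:9.1f}  AIC={aic:9.1f}")
```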
Award ID(s):
1651822
PAR ID:
10220624
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Proceedings of the AI-HRI Symposium at AAAI-FSS 2020
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Fluency, described as the "coordinated meshing of joint activities between members of a well-synchronized team," is essential to human-robot team success. Human teams achieve fluency through rich, often largely implicit, communication. A key challenge in bridging the gap between industry and academia is understanding what influences human perception of a fluent team experience, so that human-robot fluency can be better optimized in industrial environments. This paper addresses this challenge by developing an online experiment featuring videos that vary the timing of human and robot actions to influence perceived team fluency. Our results support three broad conclusions. First, we did not see differences across most subjective fluency measures. Second, people report interactions as more fluent when teammates stay more active. Third, reducing delays when humans' tasks depend on robots increases perceived team fluency.
  2. Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot–human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect of modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
  3. Virtual reality (VR) offers potential as a prototyping tool for human-robot interaction. We explored how human-centered design (HCD) methodology can be used to develop a collaborative VR game for understanding teens' perceptions of, and interactions with, social robots. Our paper features three stages of the design process for teen-robot interaction in VR: ideation, prototyping, and game development. In the ideation stage, we identified three important design principles: collaboration, customization, and robot characterization. In the prototyping stage, we developed a card game, conducted gameplay, and confirmed our design principles. Finally, we developed a low-fidelity VR game and received teens' feedback. This exploratory study highlights the potential of VR for both collaborative robot design and teen-robot interaction studies.
  4. Robots are increasingly being employed for diverse applications where they must work and coexist with humans. Trust in human–robot collaboration (HRC) is a critical aspect of any shared-task performance, for both the human and the robot. Humans' trust in robots has been investigated by numerous researchers. However, a robot's trust in its human partner, which is also a significant issue in HRC, is seldom explored in robotics. Motivated by this gap, we propose a novel trust-assist framework for human–robot co-carry tasks in this study. This framework allows the robot to determine a trust level for its human co-carry partner. This trust level is calculated from the human's motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The trust level is evaluated dynamically throughout the collaborative task, allowing it to change if the human performs false-positive actions, which can help the robot avoid making unpredictable movements and injuring the human. Additionally, the proposed framework enables the robot to generate and perform assisting movements that follow the human's carrying motions and pace when the human is considered trustworthy in the co-carry task. The results of our experiments suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework.
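As a rough illustration of how such a trust level might be computed, the sketch below maintains a scalar trust estimate that rises with consistent human motion and task performance and drops after a false-positive action. The update rule, weights, and threshold are illustrative assumptions, not the authors' actual formulation.

```python
# Hypothetical sketch of a dynamic trust update for a co-carry task.
# The blend weights, gain, penalty, and threshold are assumptions.
from dataclasses import dataclass

@dataclass
class TrustEstimator:
    trust: float = 0.5    # current trust level in [0, 1]
    gain: float = 0.1     # learning rate toward new evidence
    penalty: float = 0.3  # extra drop after a false-positive action

    def update(self, motion_consistency: float, task_performance: float,
               false_positive: bool) -> float:
        # Blend motion and performance evidence into one score in [0, 1].
        evidence = 0.5 * motion_consistency + 0.5 * task_performance
        # Exponential moving average toward the latest evidence.
        self.trust += self.gain * (evidence - self.trust)
        if false_positive:
            self.trust = max(0.0, self.trust - self.penalty)
        return self.trust

estimator = TrustEstimator()
level = estimator.update(motion_consistency=0.9, task_performance=0.8,
                         false_positive=False)
assist = level > 0.6  # e.g., only generate assisting motions above a threshold
```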
  5. Robots can influence people to accomplish their tasks more efficiently: autonomous cars can inch forward at an intersection to pass through, and tabletop manipulators can go for an object on the table first. However, a robot's ability to influence can also compromise the physical safety of nearby people if naively executed. In this work, we pose and solve a novel robust reach-avoid dynamic game which enables robots to be maximally influential, but only when a safety backup control exists. On the human side, we model the human's behavior as goal-driven but conditioned on the robot's plan, enabling us to capture influence. On the robot side, we solve the dynamic game in the joint physical and belief space, enabling the robot to reason about how its uncertainty in human behavior will evolve over time. We instantiate our method, called SLIDE (Safely Leveraging Influence in Dynamic Environments), in a high-dimensional (39-D) simulated human-robot collaborative manipulation task solved via offline game-theoretic reinforcement learning. We compare our approach to a robust baseline that treats the human as a worst-case adversary, a safety controller that does not explicitly reason about influence, and an energy-function-based safety shield. We find that SLIDE consistently enables the robot to leverage the influence it has on the human when it is safe to do so, ultimately allowing the robot to be less conservative while still ensuring a high safety rate during task execution. Project website: https://cmu-intentlab.github.io/safe-influence/ 
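To make the belief-space idea concrete, the sketch below shows a generic Bayesian belief update over candidate human behavior models, where the likelihood of an observed human action is conditioned on the robot's plan. This is a minimal illustration of the filtering step only, not the SLIDE solver or its reach-avoid game.

```python
# Hypothetical sketch: Bayesian belief update over candidate human models,
# a generic illustration of the belief-state reasoning described above.
import numpy as np

def update_belief(belief: np.ndarray, likelihoods: np.ndarray) -> np.ndarray:
    """belief[i]: P(human model i); likelihoods[i]: P(observed human action
    | model i, robot plan). Returns the normalized posterior belief."""
    posterior = belief * likelihoods
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])        # two candidate human models
likelihoods = np.array([0.7, 0.2])   # how well each explains the observed action
belief = update_belief(belief, likelihoods)  # -> approx. [0.78, 0.22]
```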