Title: Human Trust After Robot Mistakes: Study of the Effects of Different Forms of Robot Communication
Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and type of initiation used by a robot. We investigate the effect that these forms of communication have on trust with and without robot mistakes during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when that robot offered advice only upon request as opposed to when the robot took initiative to give advice.
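The abstract describes four communication forms produced by crossing the timing of information with the type of initiation. The paper defines the exact forms; the sketch below is only a hypothetical illustration of such a 2×2 design, where the labels, class names, and the `should_advise` rule are assumptions, not the study's implementation.

```python
from dataclasses import dataclass
from itertools import product

# Hypothetical labels for the two design factors; the study's own
# terminology may differ.
TIMINGS = ("before_move", "after_move")
INITIATIONS = ("proactive", "on_request")

@dataclass(frozen=True)
class CommunicationForm:
    timing: str
    initiation: str

    def should_advise(self, user_asked: bool) -> bool:
        # On-request forms stay silent unless the participant asks;
        # proactive forms volunteer advice on their own initiative.
        return user_asked if self.initiation == "on_request" else True

# Crossing the two factors yields the four communication forms.
FORMS = [CommunicationForm(t, i) for t, i in product(TIMINGS, INITIATIONS)]
```

Under this framing, the study's finding corresponds to the `on_request` forms preserving trust better after mistakes than the `proactive` ones.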
Award ID(s):
1849101
PAR ID:
10139608
Journal Name:
2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Page Range / eLocation ID:
1 to 7
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. For robots to seamlessly interact with humans, we first need to make sure that humans and robots understand one another. Diverse algorithms have been developed to enable robots to learn from humans (i.e., transferring information from humans to robots). In parallel, visual, haptic, and auditory communication interfaces have been designed to convey the robot’s internal state to the human (i.e., transferring information from robots to humans). Prior research often separates these two directions of information transfer, and focuses primarily on either learning algorithms or communication interfaces. By contrast, in this survey we take an interdisciplinary approach to identify common themes and emerging trends that close the loop between learning and communication. Specifically, we survey state-of-the-art methods and outcomes for communicating a robot’s learning back to the human teacher during human-robot interaction. This discussion connects human-in-the-loop learning methods and explainable robot learning with multimodal feedback systems and measures of human-robot interaction. We find that—when learning and communication are developed together—the resulting closed-loop system can lead to improved human teaching, increased human trust, and human-robot co-adaptation. The paper includes a perspective on several of the interdisciplinary research themes and open questions that could advance how future robots communicate their learning to everyday operators. Finally, we implement a selection of the reviewed methods in a case study where participants kinesthetically teach a robot arm. This case study documents and tests an integrated approach for learning in ways that can be communicated, conveying this learning across multimodal interfaces, and measuring the resulting changes in human and robot behavior. 
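The survey above argues that learning (human to robot) and communication (robot to human) work best as a closed loop. A minimal sketch of that idea, with an assumed Bayesian update over candidate goals and assumed message thresholds (none of which come from the surveyed papers), might look like:

```python
def update_belief(belief, likelihoods):
    """Learning step: Bayes update of a belief over candidate goals
    given the likelihood of one observed human correction under each goal."""
    posterior = [b * l for b, l in zip(belief, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]

def communicate(belief):
    """Communication step: map the robot's internal confidence to a
    human-readable message, closing the loop back to the teacher."""
    confidence = max(belief)
    if confidence > 0.8:
        return "I am confident I understand your goal."
    if confidence > 0.5:
        return "I think I understand, but please correct me if not."
    return "I am unsure; could you show me again?"

belief = [0.5, 0.5]                         # uniform prior over two goals
belief = update_belief(belief, [0.9, 0.1])  # a correction favoring goal 0
message = communicate(belief)               # report confidence to the teacher
```

The point of the sketch is only the loop structure: each human input refines the robot's estimate, and the robot's report lets the teacher adapt their next demonstration.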
  2. This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as inaccurate sorting or sorting too slowly. Participants were either punished verbally by being told to stop sorting for a fixed time, or physically, by restraining their ability to sort with an in-house crafted robotic exoskeleton. Either a human experimenter or the robot exoskeleton administered punishments, with participant task performance and subjective perceptions of their interaction with the robot recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lesser rate than human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study also contributes to our understanding of compliance with a robot and whether people accept a robot’s authority to punish. The results may influence the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment. 
  3. Mistakes, failures, and transgressions committed by a robot are inevitable as robots become more involved in our society. When a wrong behavior occurs, it is important to understand what factors might affect how the robot is perceived by people. In this paper, we investigated how the type of transgressor (human or robot) and type of backstory depicting the transgressor's mental capabilities (default, physio-emotional, socio-emotional, or cognitive) shaped participants' perceptions of the transgressor's morality. We performed an online, between-subjects study in which participants (N=720) were first introduced to the transgressor and its backstory, and then viewed a video of a real-life robot or human pushing down a human. Although participants attributed similarly high intent to both the robot and the human, the human was generally perceived to have higher morality than the robot. However, the backstory that was told about the transgressors' capabilities affected their perceived morality. We found that robots with emotional backstories (i.e., physio-emotional or socio-emotional) had higher perceived moral knowledge, emotional knowledge, and desire than other robots. We also found that humans with cognitive backstories were perceived with less emotional and moral knowledge than other humans. Our findings have consequences for robot ethics and robot design for HRI. 
  4. We examined whether a robot that proactively offers moral advice promoting the norm of honesty can discourage people from cheating. Participants were presented with an opportunity to cheat in a die-rolling game. Prior to playing the game, participants received a piece of moral advice grounded in either deontological, virtue, or Confucian role ethics from either a NAO robot or a human, or did not receive any advice. We found that moral advice grounded in Confucian role ethics could reduce cheating when the advice was delivered by a human. No advice was effective when a robot delivered it. These findings highlight challenges in building robots that can guide people to follow moral norms. 
  5. We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation where extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Prior to their decision, a robot encouraged honest choices by offering a piece of moral advice grounded in one of the three ethics frameworks. While the robot’s advice was overall not effective at discouraging dishonest choices, there was preliminary evidence indicating the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across differentially-framed moral advice. We found that individuals with a strong cultural orientation of establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences to design robots that can guide humans to comply with the norm of honesty. 