

Title: Human Trust After Robot Mistakes: Study of the Effects of Different Forms of Robot Communication
Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These failures can degrade trust between the robot and the people it serves, and a loss of trust may affect whether a user continues to rely on the robot for assistance. To improve teaming between humans and robots, forms of communication that help develop and maintain trust need to be investigated. In this study, we identify four forms of communication, defined by the timing of the information a robot gives and the type of initiation it uses. We investigate the effect these forms of communication have on trust, with and without robot mistakes, during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time had passed. The results showed that participants' trust in the robot was better preserved when the robot offered advice only upon request than when the robot took the initiative to give advice.
Award ID(s): 1849101
NSF-PAR ID: 10139608
Journal Name: 2019 28th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Page Range / eLocation ID: 1 to 7
Sponsoring Org: National Science Foundation
More Like this
1. This article examines how people respond to robot-administered verbal and physical punishments. Participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as sorting inaccurately or too slowly. Participants were punished either verbally, by being told to stop sorting for a fixed time, or physically, by having their ability to sort restrained with a custom-built robotic exoskeleton. Either a human experimenter or the robot exoskeleton administered the punishments, and participants' task performance and subjective perceptions of the interaction were recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also complied with robot-administered punishments at a lower rate than with human-administered punishments, which suggests that humans may not grant a robot the social authority to administer punishments. This study contributes to our understanding of compliance with a robot and of whether people accept a robot's authority to punish. The results may inform the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
2. Robots are increasingly employed in diverse applications where they must work and coexist with humans. Trust in human–robot collaboration (HRC) is critical to shared-task performance for both the human and the robot. A human trusting a robot has been investigated by numerous researchers; a robot trusting a human, which is also a significant issue in HRC, is seldom explored in robotics. Motivated by this gap, we propose a novel trust-assist framework for human–robot co-carry tasks. The framework allows the robot to determine a trust level for its human co-carry partner, calculated from the human's motions, past interactions between the human–robot pair, and the human's current performance in the co-carry task. The trust level is evaluated dynamically throughout the collaborative task, so trust can change when the human performs false-positive actions, which helps the robot avoid making unpredictable movements and injuring the human. When the human is considered trustworthy, the framework enables the robot to generate and perform assisting movements that follow the human's carrying motions and pace. Experimental results suggest that the robot effectively assists the human in real-world collaborative tasks through the proposed trust-assist framework; a minimal sketch of this kind of dynamic trust update appears after this list.
3. We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation in which extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Before their decision, a robot encouraged honest choices by offering moral advice grounded in one of the three ethics frameworks. While the robot's advice was not effective overall at discouraging dishonest choices, there was preliminary evidence for the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across differently framed moral advice. We found that individuals with a strong cultural orientation toward establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when the moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences when designing robots that can guide humans to comply with the norm of honesty.
4. As the influence of social robots in people's daily lives grows, research on people's perception of robots, including sociability, trust, acceptance, and preference, has become more pervasive. Prior work has considered visual, vocal, or tactile cues for expressing robots' emotions, but little research has taken a holistic view of how different factors interact to shape emotion perception. We investigated multiple facets of user perception of robots during a conversational task by varying the robots' voice types, appearances, and emotions. In our experiment, 20 participants interacted with two robots having four different voice types. While participants read fairy tales to a robot, the robot gave vocal feedback expressing seven emotions, and participants evaluated the robot's profile in post-surveys. The results indicate that (1) the accuracy of emotion perception differed depending on the emotion presented, (2) a regular human voice produced higher user preference and perceived naturalness, (3) a characterized voice was nevertheless more appropriate for expressing emotions, with significantly higher emotion recognition accuracy, and (4) participants showed significantly higher emotion recognition accuracy with the animal robot than with the humanoid robot. A follow-up study ([Formula: see text]) with voice-only conditions confirmed the importance of embodiment. These results could inform the design of social robots that consider emotional aspects of conversations between robots and users.
5. Background: The increasing prevalence of robots in industrial environments is attributable in part to advances in collaborative robot technologies, which enable robots to work in close proximity to humans. At the same time, the rise of teleoperation, in which robots are controlled remotely, poses unique opportunities and challenges for human-robot collaboration (HRC) in diverse and distributed workspaces. Purpose: There is not yet a comprehensive understanding of HRC in teleoperation that covers collaboration among the teleoperator, the robot, and the local or onsite workers in industrial settings, referred to here as teleoperator-robot-human collaboration (tRHC). We aimed to identify opportunities, challenges, and potential applications of tRHC through insights from industry stakeholders, thereby supporting effective future industrial implementations. Methods: Thirteen stakeholders in robotics, specializing in different domains (safety, robot manufacturing, aerospace/automotive manufacturing, and supply chains), completed semi-structured interviews exploring aspects relevant to tRHC. The interviews were transcribed, and thematic analysis was applied to group responses into broader categories, which were then compared across stakeholder industries. Results: We identified three main categories (Benefits, Concerns, and Technical Challenges) and 13 themes from the interviews. Interviewees highlighted accessibility, ergonomics, flexibility, safety, time and cost savings, and trust as benefits of tRHC. Concerns encompassed safety, standards, trust, and workplace optimization. Technical challenges included communication time delays, the need for high dexterity in robot manipulators, the importance of establishing shared situational awareness among all agents, and the potential of augmented and virtual reality to provide immersive control interfaces. Conclusions: Despite important challenges, tRHC could offer unique benefits, facilitating seamless collaboration among the teleoperator, teleoperated robot(s), and onsite workers across physical and geographic boundaries. To realize these benefits and address the challenges, we propose several research directions to further explore and develop tRHC capabilities.
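
The trust-assist framework in item 2 evaluates a trust level dynamically from the human's motions, past interactions, and current task performance. The Python sketch below illustrates one way such a dynamic update could look. It is a minimal sketch under stated assumptions: the class, the linear evidence weighting, the smoothing factor, and the assist threshold are all illustrative choices, since the abstract does not specify the authors' actual model.

    from dataclasses import dataclass

    @dataclass
    class TrustEstimator:
        """Tracks the robot's trust in its human co-carry partner.

        All numeric values are hypothetical, chosen for illustration only.
        """
        trust: float = 0.5             # assumed initial trust in [0, 1]
        smoothing: float = 0.8         # assumed weight on past interactions
        assist_threshold: float = 0.6  # assumed cutoff for assisting

        def update(self, motion_consistency: float, task_performance: float) -> float:
            """Blend running trust (past interactions) with current evidence.

            Both inputs are assumed normalized to [0, 1]: how consistent the
            human's carrying motion is, and how well they are performing the
            co-carry task right now.
            """
            evidence = 0.5 * motion_consistency + 0.5 * task_performance
            self.trust = self.smoothing * self.trust + (1.0 - self.smoothing) * evidence
            return self.trust

        def should_assist(self) -> bool:
            """Generate assisting motions only when the partner is deemed
            trustworthy; otherwise hold back to avoid unpredictable moves."""
            return self.trust >= self.assist_threshold

    # Example: trust drops after a false-positive action (inconsistent motion,
    # poor performance), and the robot withholds assistance until it recovers.
    est = TrustEstimator()
    for motion, perf in [(0.9, 0.8), (0.2, 0.3), (0.9, 0.9)]:
        level = est.update(motion, perf)
        print(f"trust={level:.2f} assist={est.should_assist()}")

The exponential smoothing here stands in for "past interactions": older evidence decays geometrically, so a run of false-positive actions lowers trust quickly, while sustained good performance gradually restores it and re-enables assisting movements.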