Social robots are becoming increasingly influential in shaping the behavior of humans with whom they interact. Here, we examine how the actions of a social robot can influence human-to-human communication, and not just robot–human communication, using groups of three humans and one robot playing 30 rounds of a collaborative game (n = 51 groups). We find that people in groups with a robot making vulnerable statements converse substantially more with each other, distribute their conversation somewhat more equally, and perceive their groups more positively compared to control groups with a robot that either makes neutral statements or no statements at the end of each round. Shifts in robot speech have the power not only to affect how people interact with robots, but also how people interact with each other, offering the prospect for modifying social interactions via the introduction of artificial agents into hybrid systems of humans and machines.
Safety Blanket of Humanity: Thinking of Unfamiliar Humans or Robots Increases Conformity to Humans
As robots become prevalent, merely thinking of their existence may affect how people behave. When interacting with a robot, people conform to the robot's answers more than they adhere to their own initial responses [1]. In this study, we examined how thinking of robots affects conformity to other humans. We primed participants to think of different experiences: Humans (an experience with a human stranger), Robots (an experience with a robot), or Neutral (daily life). We then measured whether participants conformed to other humans in their survey answers. Results indicated that people conformed more when thinking of Humans or Robots than of Neutral events. This implies that robots have an effect on human conformity to other humans similar to that of human strangers.
- Award ID(s):
- 1849591
- PAR ID:
- 10156076
- Date Published:
- Journal Name:
- Human-Robot Interaction
- Page Range / eLocation ID:
- 197-199
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Mistakes, failures, and transgressions committed by a robot are inevitable as robots become more involved in our society. When a wrong behavior occurs, it is important to understand what factors might affect how the robot is perceived by people. In this paper, we investigated how the type of transgressor (human or robot) and the type of backstory depicting the transgressor's mental capabilities (default, physio-emotional, socio-emotional, or cognitive) shaped participants' perceptions of the transgressor's morality. We performed an online, between-subjects study in which participants (N=720) were first introduced to the transgressor and its backstory, and then viewed a video of a real-life robot or human pushing down a human. Although participants attributed similarly high intent to both the robot and the human, the human was generally perceived to have higher morality than the robot. However, the backstory told about the transgressor's capabilities affected its perceived morality. We found that robots with emotional backstories (i.e., physio-emotional or socio-emotional) were perceived as having greater moral knowledge, emotional knowledge, and desire than other robots. We also found that humans with cognitive backstories were perceived as having less emotional and moral knowledge than other humans. Our findings have consequences for robot ethics and robot design for HRI.
-
What Happens When Robots Punish? Evaluating Human Task Performance During Robot-Initiated Punishment
This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as inaccurate sorting or sorting too slowly. Participants were punished either verbally, by being told to stop sorting for a fixed time, or physically, by having their ability to sort restrained with an in-house crafted robotic exoskeleton. Either a human experimenter or the robot exoskeleton administered punishments, with participant task performance and subjective perceptions of their interaction with the robot recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lesser rate than human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study also contributes to our understanding of compliance with a robot and whether people accept a robot's authority to punish. The results may influence the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
-
An important component of effective human-robot collaboration is the compatibility of their movements, especially when humans physically collaborate with a robot partner. Following previous findings that humans interact more seamlessly with a robot that moves with humanlike or biological velocity profiles, this study examined whether humans can adapt to a robot that violates human movement signatures. The specific focus was on the role of extensive practice and real-time augmented feedback. Six groups of participants physically tracked a robot tracing an ellipse with profiles where velocity scaled with the curvature of the path in biological and nonbiological ways, while instructed to minimize the interaction force with the robot. Three of the 6 groups received real-time visual feedback about their force error. Results showed that with 3 daily practice sessions, when given feedback about their force errors, humans could decrease their interaction forces when the robot's trajectory violated human-like velocity patterns. Conversely, when augmented feedback was not provided, there were no improvements despite this extensive practice. The biological profile showed no improvements, even with feedback, indicating that the (non-zero) force had already reached a floor level. These findings highlight the importance of biological robot trajectories and augmented feedback for guiding humans to adapt to non-biological movements in physical human-robot interaction. These results have implications for various fields of robotics, such as surgical applications and collaborative robots for industry.
-
Abstract
People may experience emotions before interacting with automated agents to seek information and support. However, existing literature has not thoroughly examined how human emotional states affect the interaction experience with agents or how automated agents should react to emotions. This study tests how participants perceive an empathetic agent (chatbot) vs. a non-empathetic one under various emotional states (i.e., positive, neutral, negative) when the chatbot mediates the initial screening process for student advising. Participants are prompted to recall a previous emotional experience and then have text-based conversations with the chatbot. The study confirms the importance of presenting empathetic cues in the design of automated agents to support human-agent collaboration. Participants who recall a positive experience are more sensitive to the chatbot's empathetic behavior. The empathetic behavior of the chatbot improves participants' satisfaction and makes those who recall a neutral experience feel more positive during the interaction. The results reveal that participants' emotional states are likely to influence their tendency to self-disclose, their interaction experience, and their perception of the chatbot's empathetic behavior. The study also highlights the growing need to acknowledge the emotions of people who experience positive affect, so that design efforts can be tailored to people's dynamic emotional states.