Punishment can serve as a form of communication: People use punishment to express information to its recipients and interpret punishment between third parties as having communicative content. Prior work on the expressive function of punishment has primarily investigated the capacity of punishment in general to communicate a single type of message – e.g., that the punished behavior violated an important norm. The present work expands this framework by testing whether different types of punishment communicate different messages. We distinguish between person-oriented punishments, which seek to harm the recipient, and action-oriented punishments, which seek to undo a harmful action. We show that people interpret action-oriented punishments, compared to person-oriented punishments, as indicating that the recipient will change for the better (Study 1). The communicative theory can explain this finding if people understand action-oriented punishment to send a message that is more effective than person-oriented punishment at causing such a change. Supporting this explanation, inferences about future behavior track recipients' beliefs about the punishment they received, rather than the punisher's intentions or the actual punishment imposed (Study 2). Indeed, when actual recipients of a person-oriented punishment believed they had received an action-oriented punishment, and vice versa, predictions of future behavior tracked the recipients' beliefs rather than reality, and judgments about what the recipients learned from the punishments mediated this effect (Study 3). Together, these studies demonstrate that laypeople think different types of punishment send different messages to recipients and that these messages are differentially effective at bringing about behavioral change.
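To make the mediation claim in Study 3 concrete, the sketch below illustrates the underlying logic on simulated data: believed punishment type (X) predicts expected behavior change (Y) largely through what the recipient learned (M). The variable names, effect sizes, and the simple regression-based (Baron–Kenny style) approach are all illustrative assumptions, not the authors' materials or analysis code.

```python
# Hypothetical illustration of the mediation logic described in Study 3.
# All data are simulated; variable names are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# X: believed punishment type (0 = person-oriented, 1 = action-oriented)
X = rng.integers(0, 2, n)
# M: mediator -- how strongly the recipient learned "my action was wrong"
M = 0.8 * X + rng.normal(0, 1, n)
# Y: predicted improvement in future behavior
Y = 0.6 * M + 0.1 * X + rng.normal(0, 1, n)

# Path c: total effect of X on Y
total = sm.OLS(Y, sm.add_constant(X)).fit()
# Path a: effect of X on the mediator M
path_a = sm.OLS(M, sm.add_constant(X)).fit()
# Paths b and c': Y regressed on both X and M
XM = sm.add_constant(np.column_stack([X, M]))
direct = sm.OLS(Y, XM).fit()

# Indirect effect a*b: the portion of X's effect carried through M
indirect = path_a.params[1] * direct.params[2]
print(f"total effect  c  = {total.params[1]:.2f}")
print(f"direct effect c' = {direct.params[1]:.2f}")
print(f"indirect (a*b)   = {indirect:.2f}")
```

In a mediation pattern like the one reported, the indirect effect (a*b) accounts for most of the total effect, and the direct effect c' shrinks toward zero once the mediator is controlled for.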
What Happens When Robots Punish? Evaluating Human Task Performance During Robot-Initiated Punishment
This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished when they made mistakes, such as sorting inaccurately or sorting too slowly. Punishments were administered either verbally, by telling participants to stop sorting for a fixed time, or physically, by restraining their ability to sort with a custom-built robotic exoskeleton. Either a human experimenter or the robot exoskeleton administered these punishments, and we recorded participants' task performance and their subjective perceptions of the interaction with the robot. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lower rate than human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study contributes to our understanding of compliance with a robot and whether people accept a robot's authority to punish. The results may influence the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
- Award ID(s): 1849068
- PAR ID: 10309910
- Date Published:
- Journal Name: ACM Transactions on Human-Robot Interaction
- Volume: 10
- Issue: 4
- ISSN: 2573-9522
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Regular exercise provides many mental and physical health benefits. However, when exercises are performed incorrectly, they can lead to injuries. Because the COVID-19 pandemic made it challenging to exercise in communal spaces, the growth of virtual fitness programs accelerated, putting people at risk of sustaining exercise-related injuries as they received little to no feedback on their exercise technique. Colocated robots could be one potential enhancement to virtual training programs, as they can produce higher learning gains, more compliance, and more enjoyment than non-colocated robots. In this study, we compare the effects of a physically present robot by having a person exercise either with a robot (robot condition) or with a video of a robot displayed on a tablet (tablet condition). Participants (N=25) had an exercise system in their homes for two weeks. Participants who exercised with the colocated robot made fewer mistakes than those who exercised with the video-displayed robot. Furthermore, participants in the robot condition reported a greater increase in fitness and more motivation to exercise than participants in the tablet condition.
-
Abstract Human–exoskeleton interactions have the potential to bring about changes in human behavior for physical rehabilitation or skill augmentation. Despite significant advances in the design and control of these robots, their application to human training remains limited. The key obstacles to the design of such training paradigms are the prediction of human–exoskeleton interaction effects and the selection of interaction control to affect human behavior. In this article, we present a method to elucidate behavioral changes in the human–exoskeleton system and identify expert behaviors correlated with a task goal. Specifically, we observe the joint coordinations of the robot, also referred to as kinematic coordination behaviors, that emerge from human–exoskeleton interaction during learning. We demonstrate the use of kinematic coordination behaviors with two task domains through a set of three human-subject studies. We find that participants (1) learn novel tasks within the exoskeleton environment, (2) demonstrate similarity of coordination during successful movements within participants, (3) learn to leverage these coordination behaviors to maximize success within participants, and (4) tend to converge to similar coordinations for a given task strategy across participants. At a high level, we identify task-specific joint coordinations that are used by different experts for a given task goal. These coordinations can be quantified by observing experts, and the similarity to these coordinations can act as a measure of learning over the course of training for novices (one way such a similarity measure might be computed is sketched after this list). The observed expert coordinations may further be used in the design of adaptive robot interactions aimed at teaching a participant the expert behaviors.
-
Mistakes, failures, and transgressions committed by a robot are inevitable as robots become more involved in our society. When a wrong behavior occurs, it is important to understand what factors might affect how the robot is perceived by people. In this paper, we investigated how the type of transgressor (human or robot) and the type of backstory depicting the transgressor's mental capabilities (default, physio-emotional, socio-emotional, or cognitive) shaped participants' perceptions of the transgressor's morality. We performed an online, between-subjects study in which participants (N=720) were first introduced to the transgressor and their backstory, and then viewed a video of a real-life robot or human pushing down a human. Although participants attributed similarly high intent to both the robot and the human, the human was generally perceived to have higher morality than the robot. However, the backstory told about the transgressor's capabilities affected their perceived morality. We found that robots with emotional backstories (i.e., physio-emotional or socio-emotional) had higher perceived moral knowledge, emotional knowledge, and desire than other robots. We also found that humans with cognitive backstories were perceived with less emotional and moral knowledge than other humans. Our findings have consequences for robot ethics and robot design for HRI.
-
We examined whether a robot that proactively offers moral advice promoting the norm of honesty can discourage people from cheating. Participants were presented with an opportunity to cheat in a die-rolling game. Prior to playing the game, participants received a piece of moral advice, grounded in deontological, virtue, or Confucian role ethics, from either a NAO robot or a human, or received no advice at all. We found that moral advice grounded in Confucian role ethics could reduce cheating when the advice was delivered by a human. No type of advice was effective when delivered by the robot. These findings highlight challenges in building robots that can possibly guide people to follow moral norms.
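As referenced in the exoskeleton abstract above, similarity to expert coordinations can serve as a measure of learning. The sketch below shows one plausible way to operationalize this idea, assuming joint-angle trajectories stored as (timesteps × joints) arrays, PCA-based extraction of a dominant expert synergy, and cosine similarity as the score. None of these choices come from the paper itself; they are illustrative assumptions.

```python
# Hypothetical sketch: score how closely a movement follows an "expert"
# joint-coordination pattern. PCA-based synergy extraction and cosine
# similarity are illustrative choices, not the paper's actual method.
import numpy as np

def expert_synergy(trials: list[np.ndarray]) -> np.ndarray:
    """First principal axis of joint-angle samples pooled over expert
    trials; each trial is a (timesteps, n_joints) array."""
    pooled = np.vstack(trials)
    pooled = pooled - pooled.mean(axis=0)
    # Right singular vectors = principal axes of joint covariation
    _, _, vt = np.linalg.svd(pooled, full_matrices=False)
    return vt[0]

def coordination_score(trial: np.ndarray, synergy: np.ndarray) -> float:
    """Mean |cosine similarity| between each joint-velocity sample and
    the expert synergy direction (1.0 = perfectly aligned)."""
    vel = np.diff(trial, axis=0)
    norms = np.linalg.norm(vel, axis=1) * np.linalg.norm(synergy)
    valid = norms > 1e-9  # ignore near-stationary samples
    cos = (vel[valid] @ synergy) / norms[valid]
    return float(np.mean(np.abs(cos)))

# Toy usage with synthetic 4-joint trajectories: expert movements vary
# mostly along one fixed joint-coupling direction; the novice's do not.
rng = np.random.default_rng(1)
coupling = np.array([[1.0, 0.5, -0.3, 0.2]])
experts = [rng.normal(size=(100, 1)) @ coupling
           + 0.05 * rng.normal(size=(100, 4)) for _ in range(5)]
novice = rng.normal(size=(100, 4))

w = expert_synergy(experts)
print("expert-like trial:", coordination_score(experts[0], w))
print("novice trial:     ", coordination_score(novice, w))
```

Under these assumptions, an expert-like trial scores near 1.0 while uncoordinated movement scores much lower, so the score could be tracked across training sessions as a rough learning curve.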