

Title: Robot-Guided Evacuation as a Paradigm for Human-Robot Interaction Research
This paper conceptualizes the problem of emergency evacuation as a paradigm for investigating human-robot interaction. We argue that emergency evacuation offers unique and important perspectives on human-robot interaction while also demanding close attention to the ethical ramifications of the technologies developed. We present a series of approaches for developing emergency evacuation robots and detail several essential design considerations. This paper concludes with a discussion of the ethical implications of emergency evacuation robots and a roadmap for their development, implementation, and evaluation.
Award ID(s):
1830390
NSF-PAR ID:
10339140
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Robotics and AI
Volume:
8
ISSN:
2296-9144
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Social robots are emerging as an important intervention for a variety of vulnerable populations. However, engaging participants in the design of social robots in a way that is ethical, meaningful, and rigorous can be challenging. Many current methods in human-robot interaction rely on laboratory practices, often experimental and at times involving deception, which could erode trust in vulnerable populations. In this paper we therefore share our human-centered design methodology, informed by a participatory approach and drawing on three years of data from a project aimed at designing and developing a social robot to improve the mental health of teens. We present three method cases from the project that describe creative and age-appropriate methods for gathering contextually valid data from a teen population. Specific techniques include design research, scenario and script writing, prototyping, and teens as operators and collaborative actors. In each case, we describe the method and its implementation and discuss its potential strengths and limitations. We conclude by situating these methods within a set of recommended participatory research principles that may be appropriate for designing new technologies with vulnerable populations.
  2. Ethical decision-making is difficult, certainly for robots, let alone humans. If a robot's ethical decision-making process is going to be designed based on some approximation of how humans operate, then the assumption is that a good model of how humans make ethical choices is readily available. Yet no single ethical framework seems sufficient to capture the diversity of human ethical decision making. Our work seeks to develop the computational underpinnings that will allow a robot to use multiple ethical frameworks to guide it towards doing the right thing. As a step towards this goal, we have collected data investigating how regular adults and ethics experts approach ethical decisions in a healthcare scenario and a game-playing scenario. The decisions made by the former group are intended to represent an approximation of a folk-morality approach to these dilemmas. The experts, on the other hand, were asked to judge what decision would result if a person were using one of several different types of ethical frameworks. The resulting data may reveal which features of the pill-sorting and game-playing scenarios contribute to similarities and differences between expert and non-expert responses. A robot programmed with this type of approach may one day rely on specific features of an interaction to determine which ethical framework to use in its decision making.
  3. As robots become more intelligent and more commonly used, it is critical that they behave ethically in human-robot interactions. However, there is a lack of agreement on a correct moral theory to guide human behavior, let alone robots. This paper introduces a robotic architecture that leverages cases drawn from different ethical frameworks to guide the ethical decision-making process and to select the appropriate robotic action for the specific situation. We also present an implementation of this architecture for a pill-sorting training task for older adults, in which the robot must decide whether it is appropriate to provide false encouragement so that the adults remain engaged in the task. A minimal, hypothetical sketch of this kind of case-based framework selection appears after this list.
  4. This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as sorting inaccurately or too slowly. Participants were punished either verbally, by being told to stop sorting for a fixed time, or physically, by having their ability to sort restrained with an in-house crafted robotic exoskeleton. Either a human experimenter or the robotic exoskeleton administered the punishments, and participants' task performance and subjective perceptions of their interaction with the robot were recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lower rate than with human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study also contributes to our understanding of compliance with a robot and of whether people accept a robot's authority to punish. The results may influence the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
  5. Significant segments of the HRI literature rely on or promote the ability to reason about human identity characteristics, including age, gender, and cultural background. However, attempting to handle identity characteristics raises a number of critical ethical concerns, especially given the spatiotemporal dynamics of these characteristics. In this paper I question whether human identity characteristics can and should be represented, recognized, or reasoned about by robots, with special attention paid to the construct of race, due to its relative lack of consideration within the HRI community. As I will argue, while there are a number of well-warranted reasons why HRI researchers might want to enable robotic consideration of identity characteristics, these reasons are outweighed by a number of key ontological, perceptual, and deployment-oriented concerns. This argument raises troubling questions as to whether robots should even be able to understand or generate descriptions of people, and how they would do so while avoiding these ethical concerns. Finally, I conclude with a discussion of what this means for the HRI community, in terms of both algorithm and robot design, and speculate as to possible paths forward. 
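
The case-based framework selection described in item 3 (and the feature-driven framework selection suggested at the end of item 2) can be illustrated with a short sketch. The code below is a hypothetical approximation rather than the authors' architecture: the Case structure, the feature names (user_frustration, honesty_cost), the similarity measure, and the example actions are all assumptions made purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # situation descriptors (hypothetical names, values in [0, 1])
    framework: str   # ethical framework the case exemplifies
    action: str      # action judged appropriate under that framework

def similarity(a: dict, b: dict) -> float:
    """Overlap-based similarity between two feature dictionaries."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1.0 - abs(a[k] - b[k]) for k in shared) / len(shared)

def select_case(situation: dict, case_base: list) -> Case:
    """Retrieve the stored case whose features best match the current situation."""
    return max(case_base, key=lambda c: similarity(situation, c.features))

# Hypothetical case base for the pill-sorting encouragement dilemma.
case_base = [
    Case({"user_frustration": 0.8, "honesty_cost": 0.3},
         framework="consequentialist", action="offer encouragement"),
    Case({"user_frustration": 0.2, "honesty_cost": 0.9},
         framework="deontological", action="report performance accurately"),
]

current_situation = {"user_frustration": 0.7, "honesty_cost": 0.4}
chosen = select_case(current_situation, case_base)
print(chosen.framework, "->", chosen.action)
```

Retrieving the single most similar framework-tagged case is only one of many ways such an architecture might map situation features to an ethical framework; the papers above describe the motivation rather than a specific matching rule.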