
Search for: All records

Award ID contains: 1909847


  1. Free, publicly-accessible full text available January 1, 2023
  2. Robots are entering various domains of human society, potentially opening up more opportunities for people to perceive robots as social agents. We expect that having robots in proximity would create unique social learning situations where humans spontaneously observe and imitate robots’ behaviors. At times, such imitation of robot behaviors may spread unsafe or unethical behaviors among humans. For responsible robot design, therefore, we argue that it is essential to understand the physical and psychological triggers of social learning. Grounded in the existing literature on social learning and the uncanny valley, we discuss the human-likeness of robot appearance, and the affective responses it elicits, as likely factors that either facilitate or deter social learning. We propose practical considerations for social learning and robot design.
  3. We examined how robots can successfully serve as moral advisors for humans. We evaluated the effectiveness of moral advice grounded in deontological, virtue, and Confucian role ethics frameworks in encouraging humans to make honest decisions. Participants were introduced to a tempting situation where extra monetary gain could be earned by choosing to cheat (i.e., violating the norm of honesty). Prior to their decision, a robot encouraged honest choices by offering a piece of moral advice grounded in one of the three ethics frameworks. While the robot’s advice was not effective overall at discouraging dishonest choices, there was preliminary evidence of the relative effectiveness of moral advice drawn from deontology. We also explored how different cultural orientations (i.e., vertical and horizontal collectivism and individualism) influence honest decisions across the differently framed moral advice. We found that individuals with a strong cultural orientation toward establishing their own power and status through competition (i.e., high vertical individualism) were more likely to make dishonest choices, especially when the moral advice was drawn from virtue ethics. Our findings suggest the importance of considering different ethical frameworks and cultural differences when designing robots that can guide humans to comply with the norm of honesty.
  4. Because robots are perceived as moral agents, they hold significant persuasive power over humans. It is thus crucial for robots to behave in accordance with human systems of morality and to use effective strategies for human-robot moral communication. In this work, we evaluate two moral communication strategies, a norm-based strategy grounded in deontological ethics and a role-based strategy grounded in role ethics, to test how effectively each encourages compliance with norms rooted in role expectations. Our results suggest two major findings: (1) reflective exercises may increase the efficacy of role-based moral language, and (2) opportunities for moral practice following robots’ use of moral language may facilitate role-centered moral cultivation.
  5. Most previous work on enabling robots’ moral competence has used norm-based systems of moral reasoning. However, a number of limitations of norm-based ethical theories have been widely acknowledged. These limitations may be addressed by role-based ethical theories, which have been discussed extensively in the philosophy of technology literature but have received little attention within robotics. My work proposes a hybrid role/norm-based model of robot cognitive processes, including moral cognition; a minimal illustrative sketch of one such hybrid appears after this list.
  6. Due to their unique persuasive power, language-capable robots must be able both to act in line with human moral norms and to communicate those norms clearly and appropriately. These requirements are complicated by the possibility that humans may ascribe blame differently to humans and robots. In this work, we explore how robots should communicate in moral advising scenarios, in which the norms they are expected to follow (in a moral dilemma scenario) may differ from those their advisees are expected to follow. Our results suggest that, in fact, both humans and robots are judged more positively when they provide advice that favors the common good over an individual’s life. These results raise critical new questions about people’s moral responses to robots and the design of autonomous moral agents.
  7. We examined whether a robot that proactively offers moral advice promoting the norm of honesty can discourage people from cheating. Participants were presented with an opportunity to cheat in a die-rolling game. Prior to playing the game, participants received a piece of moral advice, grounded in deontological, virtue, or Confucian role ethics, from either a NAO robot or a human, or received no advice at all. We found that moral advice grounded in Confucian role ethics could reduce cheating when the advice was delivered by a human; none of the advice was effective when delivered by the robot. These findings highlight the challenges of building robots that can guide people to follow moral norms.
  8. Ess, Charles (Ed.)
    Dominant approaches to designing morally capable robots have been based mainly on rule-based ethical frameworks such as deontology and consequentialism. These approaches have encountered both philosophical and computational limitations, often struggling to accommodate the remarkably diverse, unstable, and complex contexts of human-robot interaction. Roboticists and philosophers have recently been exploring underrepresented ethical traditions, such as virtue, role-based, and relational ethical frameworks, for designing morally capable robots. This paper employs the lens of ethical pluralism to examine the notion of role-based morality in the global context and to discuss how such cross-cultural analysis of role ethics can inform the design of morally competent robots. In doing so, it first provides a concise introduction to ethical pluralism and how it has been employed as a method to interpret issues in computer and information ethics. Second, it reviews specific schools of thought in Western ethics that derive morality from role-based obligations. Third, it presents a more recent effort in Confucianism to reconceptualize Confucian ethics as a role-based ethic. The paper then compares the shared norms and irreducible differences between Western and Eastern approaches to role ethics. Finally, it discusses how such an examination of pluralist views of role ethics across cultures can be conducive to the design of morally capable robots sensitive to diverse value systems in the global context.
  9. This paper considers the cultivation of ethical identities among future engineers and computer scientists, particularly those whose professional practice will extensively intersect with emerging technologies enabled by artificial intelligence (AI). Many current engineering and computer science students will go on to participate in the development and refinement of AI, machine learning, robotics, and related technologies, thereby helping to shape the future directions of these applications. Researchers have demonstrated the actual and potential deleterious effects that these technologies can have on individuals and communities. Together, these trends present a timely opportunity to steer AI and robotic design in directions that confront, or at least do not extend, patterns of discrimination, marginalization, and exclusion. Examining ethics interventions in AI and robotics education may yield insights into challenges and opportunities for cultivating ethical engineers. We present our ongoing research on engineering ethics education, examine how our work is situated with respect to current AI and robotics applications, and discuss a curricular module in “Robot Ethics” that was designed to achieve interdisciplinary learning objectives. Finally, we offer recommendations for more effective engineering ethics education, with a specific focus on emerging technologies.
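
The hybrid role/norm-based model mentioned in item 5 (and the norm-based versus role-based strategies compared in item 4) is described only at a high level in these abstracts. Purely as a minimal sketch, assuming a simple precedence of hard norm constraints over softer role expectations, the following Python example shows one way such a hybrid could be structured. The class names (Norm, Role, HybridMoralReasoner), the action labels, and the two-stage ordering are illustrative assumptions, not the authors’ actual model.

```python
# Hypothetical sketch only: all names and logic here are illustrative
# assumptions, not the model proposed in the publications above.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Norm:
    """A deontic rule: performing `action` in `context` is forbidden."""
    action: str
    context: str


@dataclass
class Role:
    """A social role and the behaviors expected of its occupant."""
    name: str
    expected_actions: set[str] = field(default_factory=set)


class HybridMoralReasoner:
    """Checks hard norm constraints first, then softer role expectations."""

    def __init__(self, norms: set[Norm], roles: list[Role]) -> None:
        self.norms = norms
        self.roles = roles

    def evaluate(self, action: str, context: str) -> str:
        # Stage 1 (norm-based): refuse anything a norm forbids here.
        if Norm(action, context) in self.norms:
            return f"refuse: a norm forbids '{action}' in '{context}'"
        # Stage 2 (role-based): permit actions a role expects.
        for role in self.roles:
            if action in role.expected_actions:
                return f"permit: expected of the '{role.name}' role"
        # Neither applies: defer to a human or a default policy.
        return "defer: no applicable norm or role expectation"


# Usage with made-up actions and contexts:
reasoner = HybridMoralReasoner(
    norms={Norm(action="share_private_data", context="workplace")},
    roles=[Role(name="teammate", expected_actions={"offer_help"})],
)
print(reasoner.evaluate("share_private_data", "workplace"))  # refuse: ...
print(reasoner.evaluate("offer_help", "workplace"))          # permit: ...
```

A real system would need far richer, context-sensitive representations of norms and roles; the point of the sketch is only the two-stage structure in which role-based expectations complement, rather than replace, norm-based constraints.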