Title: War and Peace: Ethical Challenges and Risks in Military Robotics
The United States Department of Defense (DoD) designs, constructs, and deploys social and autonomous robots and robotic weapons systems. Military robots are designed to follow the rules and conduct of the professions or roles they emulate, and ethical principles are expected to be applied in alignment with those roles. The application of these principles appears paramount during the COVID-19 global pandemic, wherein substitute technologies are crucial for carrying out duties as humans are constrained by safety restrictions. This article examines the ethical implications of the use of military robots. The research assesses the ethical challenges faced by the United States DoD regarding the use of social and autonomous robots in the military. The authors provide a summary of the current status of lethal autonomous and social military robots, the ethical and moral issues related to their design and deployment, a discussion of policies, and a call for international discourse on the appropriate governance of such systems.
Award ID(s):
1912070
PAR ID:
10326934
Author(s) / Creator(s):
Date Published:
Journal Name:
International Journal of Intelligent Information Technologies
Volume:
17
Issue:
3
ISSN:
1548-3657
Page Range / eLocation ID:
1 to 12
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In 2020, the U.S. Department of Defense officially disclosed a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between military and medical service. Warriors on battlefields often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or during surgery to treat a life-threatening condition. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and the abundance of health data, such as electronic health records, electrocardiograms, and medical images, increases, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has garnered substantial attention in the medical research community, leading to debates about its application in the healthcare sector, mainly due to concerns about transparency and related issues. Meanwhile, questions about the potential exacerbation of health disparities due to modeling biases have raised notable ethical concerns regarding the use of this technology in healthcare. However, ethical principles for generative AI in healthcare have been understudied. As a result, there are no clear solutions to address ethical concerns, and decision-makers often neglect to consider the significance of ethical principles before implementing generative AI in clinical practice. To address these issues, we explore ethical principles from the military perspective and propose the "GREAT PLEA" ethical principles for generative AI in healthcare: Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy. Furthermore, by contrasting the ethical concerns and risks of the two domains, we introduce a framework, which has proven useful in the military, for adopting and expanding these ethical principles in healthcare in a practical way. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
  2. Because robots are perceived as moral agents, they must behave in accordance with human systems of morality. This responsibility is especially acute for language-capable robots because moral communication is a method for building moral ecosystems. Language-capable robots must not only ensure that what they say adheres to moral norms; they must also actively engage in moral communication to regulate and encourage human compliance with those norms. In this work, we describe four experiments (total N = 316) across which we systematically evaluate two different moral communication strategies that robots could use to influence human behavior: a norm-based strategy grounded in deontological ethics, and a role-based strategy grounded in role ethics. Specifically, we assess the effectiveness of robots that use these two strategies to encourage human compliance with norms grounded in expectations of behavior associated with certain social roles. Our results suggest two major findings, demonstrating the importance of moral reflection and moral practice for effective moral communication: first, opportunities for reflection on ethical principles may increase the efficacy of robots' role-based moral language; and second, following robots' moral language with opportunities for moral practice may facilitate role-based moral cultivation.
  3. Trust, dependability, cohesion, and capability are integral to an effective team, and these attributes hold for teams of robots as well. When multiple teams with competing incentives are tasked, one available strategy may be to weaken, influence, or sway the attributes of other teams and limit their understanding of their full range of options. Such strategies are widely found in nature and in sporting contests, for example feints and misdirection. This talk focuses on one class of higher-level strategies for multi-robot teams: intentionally misdirecting using shills or confederates where needed, and the ethical considerations associated with deploying such teams. As multi-robot systems become more autonomous, distributed, networked, and numerous, and gain more capability to make critical decisions, the prospect of intentional and unintentional misdirection must be anticipated. While the benefits to the team performing the deception are clearly apparent, the ethical questions surrounding the use of misdirection or other forms of deception are quite real.
  4. How do software engineers identify and act on their ethical concerns? Past work examines how software practitioners navigate specific ethical principles such as "fairness", but this narrows the scope of concerns to implementing pre-specified principles. In contrast, we report the self-identified ethical concerns of 115 survey respondents and 21 interviewees across five continents and in non-profit, contractor, and non-tech firms. We enumerate their concerns – military, privacy, advertising, surveillance – and the scope of those concerns, from simple bugs to questioning their industry's entire existence. We illustrate how attempts to resolve concerns are limited by factors such as personal precarity and organizational incentives. We discuss how even relatively powerful software engineers often lacked the power to resolve their ethical concerns. Our results suggest that ethics interventions must expand from helping practitioners merely identify issues to helping them build their (collective) power to resolve them, and that tech ethics discussions might consider broadening beyond a focus on AI or Big Tech.
  5. Child-robot interactions in educational, developmental, and health domains are widely explored, but little is known about how families perceive the presence of a social robot in their home environment and its participation in day-to-day activities. To close this gap, we conducted a participatory design (PD) study with six families, with children aged 10–12, to examine how families perceive in-home social robots participating in shared activities. Our analysis identified three main themes: (1) the robot can have a range of roles in the home as a companion or as an assistant; (2) family members have different preferences for how they would like to interact with the robot in group or personal interactions; and (3) families have privacy, confidentiality, and ethical concerns regarding a social robot's presence in the home. Based on these themes and existing literature, we provide guidelines for the future interaction design of in-home social robots for children.