Abstract In 2020, the U.S. Department of Defense formally adopted a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between military and medical service. Warriors on battlefields often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or the operating room when treating a life-threatening condition. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and health data, such as electronic health records, electrocardiograms, and medical images, grow more abundant, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has attracted considerable attention in the medical research community, leading to debates about its application in the healthcare sector, mainly because of concerns about transparency and related issues. Meanwhile, concerns that modeling biases could exacerbate health disparities have raised notable ethical questions about the use of this technology in healthcare. However, ethical principles for generative AI in healthcare remain understudied: there are no clear solutions to these concerns, and decision-makers often neglect to weigh ethical principles before implementing generative AI in clinical practice. To address these issues, we examine ethical principles from the military perspective and propose the “GREAT PLEA” ethical principles for generative AI in healthcare: Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy.
Furthermore, we introduce a practical framework, informed by military experience, for adopting and expanding these ethical principles in healthcare applications of generative AI, based on a comparison of the ethical concerns and risks in the two domains. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
Ethics in human–AI teaming: principles and perspectives
Abstract Ethical considerations are the fabric of society, and they foster cooperation, help, and sacrifice for the greater good. Advances in AI create a greater need to examine ethical considerations involving the development and implementation of such systems. Integrating ethics into artificial intelligence-based programs is crucial for preventing negative outcomes, such as privacy breaches and biased decision making. Human–AI teaming (HAIT) presents additional challenges, as the ethical principles and moral theories that provide justification for them are not yet computable by machines. To that end, models of human judgments and decision making, such as the agent-deed-consequence (ADC) model, will be crucial to inform the ethical guidance functions in AI teammates and to clarify how and why humans (dis)trust machines. The current paper will examine the ADC model as it is applied to the context of HAIT, and the challenges associated with the use of human-centric ethical considerations when applied to an AI context.
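The ADC model's core idea, that a moral judgment weighs evaluations of the agent's character, the deed itself, and its consequences, can be sketched as a toy computable function. The linear form, the weights, and the scoring scale below are illustrative assumptions for this sketch; the ADC literature does not prescribe them:

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    agent: float        # virtue of the agent's character/intent, in [-1, 1]
    deed: float         # rightness of the action itself, in [-1, 1]
    consequence: float  # goodness of the outcome, in [-1, 1]

def adc_judgment(s: Scenario, w_a=0.3, w_d=0.4, w_c=0.3) -> float:
    """Toy linear combination of the three ADC components.

    The weights are hypothetical: the ADC model itself does not
    prescribe a specific functional form or weighting.
    """
    return w_a * s.agent + w_d * s.deed + w_c * s.consequence

# A well-intentioned rule violation with a good outcome
# (e.g. speeding to the hospital in an emergency):
speeding_emergency = Scenario(agent=0.9, deed=-0.5, consequence=0.8)
print(adc_judgment(speeding_emergency))  # positive overall judgment
```

A sketch like this makes concrete why such models matter for HAIT: an AI teammate could at least represent that a negatively valenced deed can be outweighed by agent intent and consequences, rather than applying a single rigid rule.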
- Award ID(s):
- 2043612
- PAR ID:
- 10391629
- Date Published:
- Journal Name:
- AI and Ethics
- ISSN:
- 2730-5953
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Many real-life scenarios require humans to make difficult trade-offs: do we always follow all the traffic rules, or do we violate the speed limit in an emergency? In general, how should we account for and balance ethical values, safety recommendations, and societal norms when we are trying to achieve a certain objective? To enable effective AI-human collaboration, we must equip AI agents with a model of how humans make such trade-offs in environments where there is not only a goal to be reached but also ethical constraints to be considered and possibly aligned with. These ethical constraints could be deontological rules on actions that should not be performed, or consequentialist policies that recommend avoiding certain states of the world. Our purpose is to build AI agents that can mimic human behavior in these ethically constrained decision environments, with the long-term research goal of using AI to help humans make better moral judgments and take better actions. To this end, we propose a computational approach where competing objectives and ethical constraints are orchestrated through a method that leverages a cognitive model of human decision making, called multi-alternative decision field theory (MDFT). Using MDFT, we build an orchestrator, called MDFT-Orchestrator (MDFT-O), that is both general and flexible. We also show experimentally that MDFT-O not only generates better decisions than a heuristic that takes a weighted average of competing policies (WA-O), but also performs better at mimicking human decisions as collected through Amazon Mechanical Turk (AMT). Our methodology is therefore able to faithfully model human decisions in ethically constrained decision environments.
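The preference-accumulation dynamics at the heart of MDFT can be sketched minimally: attention stochastically fixates on one attribute at a time (here, goal progress versus ethical compliance), each alternative accumulates its value on the attended attribute plus noise, and the first alternative to cross a threshold is chosen. This is an illustrative simplification, not the authors' MDFT-O implementation; the value matrix, threshold, and noise level are assumptions, and the contrast and lateral-inhibition matrices of full MDFT are omitted:

```python
import random

def mdft_choice(M, theta=1.0, attention_probs=None, max_steps=10_000, rng=None):
    """Minimal MDFT-style preference accumulation (illustrative only).

    M[i][j] is the subjective value of alternative i on attribute j.
    At each step, attention fixates on one attribute (sampled from
    attention_probs); every alternative's preference accumulates its
    value on that attribute plus Gaussian noise. The first alternative
    whose preference crosses `theta` is chosen.
    """
    rng = rng or random.Random()
    n_alt, n_attr = len(M), len(M[0])
    attention_probs = attention_probs or [1.0 / n_attr] * n_attr
    P = [0.0] * n_alt
    for _ in range(max_steps):
        j = rng.choices(range(n_attr), weights=attention_probs)[0]
        for i in range(n_alt):
            P[i] += M[i][j] + rng.gauss(0, 0.1)
        best = max(range(n_alt), key=lambda i: P[i])
        if P[best] >= theta:
            return best
    return max(range(n_alt), key=lambda i: P[i])

# Two hypothetical actions scored on (goal progress, ethical compliance):
# action 0 is fast but rule-violating; action 1 is slower but compliant.
M = [[0.9, 0.1],
     [0.5, 0.8]]
```

With attention weighted toward the ethical attribute (e.g. `attention_probs=[0.05, 0.95]`), the compliant action 1 dominates the choice, which is the kind of trade-off behavior an orchestrator over competing policies needs to capture.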
-
The recent surge in artificial intelligence (AI) developments has been met with increased attention to incorporating ethical engagement in machine learning discourse and development. This attention is noticeable within engineering education, where comprehensive ethics curricula are typically absent from engineering programs that train future engineers to develop AI technologies [1]. Artificial intelligence technologies operate as black boxes, presenting both developers and users with a certain level of obscurity concerning their decision-making processes and a diminished potential for negotiating with their outputs [2]. The implementation of collaborative and reflective learning has the potential to engage students with facets of ethical awareness that go along with algorithmic decision making, such as bias, security, transparency, and other ethical and moral dilemmas. However, there are few studies that examine how students learn AI ethics in electrical and computer engineering courses. This paper explores the integration of STEMtelling, a pedagogical storytelling method/sensibility, into an undergraduate machine learning course. STEMtelling is a novel approach that invites participants (STEMtellers) to center their own interests and experiences through writing and sharing engineering stories (STEMtells) that are connected to course objectives. Employing a case study approach grounded in activity theory, we explore how students learn ethical awareness that is intrinsic to being an engineer. During the STEMtelling process, STEMtellers blur the boundaries between social and technical knowledge to place themselves at the center of knowledge production. In this WIP, we discuss algorithmic awareness as one of the themes identified as a practice in developing ethical awareness of AI through STEMtelling.
Findings from this study will inform the development of STEMtelling and address the challenges of integrating ethics and the social perception of AI into machine learning courses.
-
Human-designed systems are increasingly leveraged by data-driven methods and artificial intelligence. This leads to an urgent need for responsible design and ethical use. The goal of this conceptual paper is two-fold. First, we introduce the Framework for Design Reasoning in Data Life-cycle Ethical Management, which integrates three existing frameworks: 1) the design reasoning quadrants framework (representing engineering design research), 2) the data life-cycle model (representing data management), and 3) the reflexive principles framework (representing ethical decision-making). The integration of the framework's three critical components (design reasoning, data reasoning, and ethical reasoning) is accomplished by centering on the conscientious negotiation of design risks and benefits. Second, we present an example of a student design project report to demonstrate how this framework guides educators toward delineating and integrating data reasoning, ethical reasoning, and design reasoning in settings where ethical issues (e.g., AI solutions) are commonly experienced. The framework can be implemented in design courses through design review conversations that seamlessly integrate ethical reasoning into the technical and data decision-making processes.
-
Abstract Sensors and control technologies are being deployed at unprecedented levels in both urban and rural water environments. Because sensor networks and control allow for higher-resolution monitoring and decision making in both time and space, greater discretization of control will allow for an unprecedented precision of impacts, both positive and negative. Likewise, humans will continue to cede direct decision-making powers to decision-support technologies, e.g. data algorithms. Systems will have ever-greater potential to affect human lives, and yet humans will be distanced from decisions. Combined, these trends challenge water resources management decision-support tools to incorporate the concepts of ethical and normative expectations. Toward this aim, we propose the Water Ethics Web Engine (WE)2, an integrated and generalized web framework to incorporate voting-based ethical and normative preferences into water resources decision support. We demonstrate this framework with a ‘proof-of-concept’ use case where decision models are learned and deployed to respond to flooding scenarios. Findings indicate that the framework can capture group ‘wisdom’ within learned models to use in decision making. The methodology and ‘proof-of-concept’ system presented here are a step toward building a framework to engage people with algorithmic decision making in cases where ethical preferences are considered. We share our framework and its cyber components openly with the research community.
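The core of a voting-based approach like this is aggregating crowd preferences into a per-scenario decision rule. The abstract does not specify the learning method used in (WE)2, so the majority-vote aggregation, scenario names, and actions below are all hypothetical, a minimal stand-in for the learned decision models:

```python
from collections import Counter

def learn_group_policy(votes):
    """Aggregate crowd votes into a per-scenario decision rule by
    majority preference.

    `votes` maps each scenario to the list of actions chosen by
    individual voters. This is a hypothetical minimal stand-in for
    the learned decision models described in the abstract.
    """
    return {scenario: Counter(choices).most_common(1)[0][0]
            for scenario, choices in votes.items()}

# Hypothetical flooding-scenario votes: whether to retain stormwater
# upstream or release it under each condition.
votes = {
    "moderate_rain": ["retain", "retain", "release", "retain"],
    "flash_flood":   ["release", "release", "release", "retain"],
}
policy = learn_group_policy(votes)
print(policy["flash_flood"])  # -> "release"
```

Even this toy version captures the framework's key property: the deployed decision rule reflects aggregated group preferences rather than a single designer's judgment.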