Abstract In 2020, the U.S. Department of Defense officially disclosed a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between military and medical service. Warriors on battlefields often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or when treating a life-threatening condition during surgery. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and the abundance of health data, such as electronic health records, electrocardiograms, and medical images, increases, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has garnered considerable attention in the medical research community, leading to debates about its application in the healthcare sector, mainly due to concerns about transparency and related issues. Meanwhile, questions around the potential exacerbation of health disparities due to modeling biases have raised notable ethical concerns regarding the use of this technology in healthcare. However, the ethical principles for generative AI in healthcare have been understudied. As a result, there are no clear solutions to address ethical concerns, and decision-makers often neglect to consider the significance of ethical principles before implementing generative AI in clinical practice. In an attempt to address these issues, we explore ethical principles from the military perspective and propose the "GREAT PLEA" ethical principles, namely Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy for generative AI in healthcare.
Furthermore, we introduce a framework for adopting and expanding these ethical principles in a practical way that has been useful in the military and can be applied to healthcare for generative AI, based on contrasting their ethical concerns and risks. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
Consentful-by-design: a perspective on safeguarding data ownership from generative AI leveraging lessons from the healthcare domain
Research on Artificial Intelligence (AI) is lined with moral considerations. In healthcare, a high-risk field, sub-fields have emerged to mitigate AI-specific ethical issues such as fairness and transparency. However, similar considerations remain unaddressed beyond healthcare, and as generative AI tools ('GenAI') reach lay audiences, this neglect yields ethical concerns. The present work focuses on learning from ethical considerations in healthcare to mitigate challenges of GenAI. We structure our proposed mitigation strategies around three of the five established biomedical and AI ethics principles (autonomy, transparency, beneficence), highlighting the risks GenAI poses for intellectual property owners (scraping of copyrighted data, unpaid labor, plagiarism, fraud). We propose concrete ways to affirm these principles on GenAI, using biomedical AI examples and the emerging frameworks they have sparked in domains of data ownership, federated learning, and data provenance. This article comes at a pivotal time for AI, generalizing ethics-aware principles to GenAI to open new research avenues toward responsible AI.
- PAR ID:
- 10673172
- Editor(s):
- Gill, Karamjit
- Publisher / Repository:
- AI & Society: Knowledge, Culture and Communication
- Date Published:
- Journal Name:
- AI & SOCIETY
- ISSN:
- 0951-5666
- Subject(s) / Keyword(s):
- Artificial intelligence Generative AI Intellectual property Data ownership Labor exploitation AI ethics
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes
What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning, but also raise ethical concerns such as transparency, trust and accountability.
- Value-sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds
- We apply VSD to design LLM-based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.
Implications for practice and/or policy
- Identity development, well-being, human–AI relationships and environmental sustainability are key values for designing LLM-based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
-
Abstract The recent development and use of generative AI (GenAI) has signaled a significant shift in research activities such as brainstorming, proposal writing, dissemination, and even reviewing. This has raised questions about how to balance the seemingly productive uses of GenAI with ethical concerns such as authorship and copyright issues, use of biased training data, lack of transparency, and impact on user privacy. To address these concerns, many Higher Education Institutions (HEIs) have released institutional guidance for researchers. To better understand the guidance that is being provided, we report findings from a thematic analysis of guidelines from thirty HEIs in the United States that are classified as R1 or "very high research activity." We found that guidance provided to researchers: (1) asks them to refer to external sources of information such as funding agencies and publishers to keep updated and use institutional resources for training and education; (2) asks them to understand and learn about specific GenAI attributes that shape research such as predictive modeling, knowledge cutoff date, data provenance, and model limitations, and educate themselves about ethical concerns such as authorship, attribution, privacy, and intellectual property issues; and (3) includes instructions on how to acknowledge sources and disclose the use of GenAI, how to communicate effectively about their GenAI use, and alerts researchers to long-term implications such as over-reliance on GenAI, legal consequences, and risks to their institutions from GenAI use. Overall, guidance places the onus of compliance on individual researchers, making them accountable for any lapses and thereby increasing their responsibility.
-
Applied machine learning (ML) has not yet coalesced on standard practices for research ethics. For ML that predicts mental illness using social media data, ambiguous ethical standards can impact peoples' lives because of the area's sensitivity and material consequences on health. Transparency of current ethics practices in research is important to document decision-making and improve research practice. We present a systematic literature review of 129 studies that predict mental illness using social media data and ML, and the ethics disclosures they make in research publications. Rates of disclosure are going up over time, but this trend is slow moving – it will take another eight years for the average paper to have coverage on 75% of studied ethics categories. Certain practices are more readily adopted, or "stickier", over time, though we found prioritization of data-driven disclosures rather than human-centered ones. These inconsistently reported ethical considerations indicate a gap between what ML ethicists believe ought to be and what actually is done. We advocate for closing this gap through increased transparency of practice and formal mechanisms to support disclosure.
-
This column explores the practical considerations and institutional strategies for adopting Generative Artificial Intelligence (GenAI) tools in academic libraries. As higher education institutions increasingly integrate AI into teaching, research, and student support, libraries play a pivotal role in guiding ethical, inclusive, and pedagogically sound implementation. Drawing on case studies from Clemson University, Wake Forest University, and Goldey-Beacom College, the column examines key areas of GenAI integration: contract negotiations, licensing models, trial and pilot program design, data privacy, accessibility, authentication, analytics, training, and ethical use. The article emphasizes the importance of aligning AI adoption with institutional missions, user agency, and evolving frameworks of AI literacy. Recommendations are provided for libraries of all sizes to navigate the dynamic GenAI landscape responsibly and equitably, ensuring that academic integrity and student-centered values remain at the core of AI integration.