Responsible AI (RAI) is the science and practice of ensuring that the design, development, use, and oversight of AI are socially sustainable: benefiting diverse stakeholders while controlling risks. Achieving this goal requires active engagement and participation from the broader public. This paper introduces We are AI: Taking Control of Technology, a public education course that brings the topics of AI and RAI to a general audience in a peer-learning setting. We outline the goals behind the course's development, discuss the multi-year iterative process that shaped its creation, and summarize its content. We also discuss two offerings of We are AI to an active and engaged group of librarians and professional staff at New York University, highlighting successes and areas for improvement. The course materials, including a multilingual comic book series by the same name, are publicly available and can be used independently. By sharing our experience in creating and teaching We are AI, we aim to introduce these resources to the community of AI educators, researchers, and practitioners, supporting their public education efforts.
AI Can Be a Powerful Social Innovation for Public Health if Community Engagement Is at the Core
There is a critical need for community engagement in the process of adopting artificial intelligence (AI) technologies in public health. Public health practitioners and researchers have historically innovated in areas like vaccination and sanitation but have been slower in adopting emerging technologies such as generative AI. However, with increasingly complex funding, programming, and research requirements, the field now faces a pivotal moment to enhance its agility and responsiveness to evolving health challenges. Participatory methods and community engagement are key components of many current public health programs and research. The field of public health is well positioned to ensure community engagement is part of AI technologies applied to population health issues. Without such engagement, the adoption of these technologies in public health may exclude significant portions of the population, particularly those with the fewest resources, with the potential to exacerbate health inequities. Risks to privacy and perpetuation of bias are more likely to be avoided if AI technologies in public health are designed with knowledge of community engagement, existing health disparities, and strategies for improving equity. This viewpoint proposes a multifaceted approach to ensure safer and more effective integration of AI in public health with the following call to action: (1) include the basics of AI technology in public health training and professional development; (2) use a community engagement approach to co-design AI technologies in public health; and (3) introduce governance and best practice mechanisms that can guide the use of AI in public health to prevent or mitigate potential harms. These actions will support the application of AI to varied public health domains through a framework for more transparent, responsive, and equitable use of this evolving technology, augmenting the work of public health practitioners and researchers to improve health outcomes while minimizing risks and unintended consequences.
- Award ID(s): 2339880
- PAR ID: 10579705
- Publisher / Repository: Journal of Medical Internet Research
- Date Published:
- Journal Name: Journal of Medical Internet Research
- Volume: 27
- ISSN: 1438-8871
- Page Range / eLocation ID: e68198
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This paper addresses two policy questions. First, how might evolving technologies associated with broadband networks enhance or hinder marginalized or underserved population groups’ effective use of and access to information resources? Second, how can we foster public hybrid broadband, situating broadband networks within existing communities, as a means to promote digital self-determination? This study finds that wireless mesh technology initiatives can create and foster community engagement through infrastructure deployment, maintenance, and use; combat myths regarding marginalized demographics and technology; and provide marginalized communities with an opportunity to become decision-makers regarding communications technology infrastructure development.
- In 2020, the U.S. Department of Defense officially disclosed a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between military and medical service. Warriors on battlefields often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or during surgery to treat a life-threatening condition. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and the abundance of health data, such as electronic health records, electrocardiograms, and medical images, increases, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has garnered a lot of attention in the medical research community, leading to debates about its application in the healthcare sector, mainly due to concerns about transparency and related issues. Meanwhile, questions around the potential exacerbation of health disparities due to modeling biases have raised notable ethical concerns regarding the use of this technology in healthcare. However, the ethical principles for generative AI in healthcare have been understudied. As a result, there are no clear solutions to address ethical concerns, and decision-makers often neglect to consider the significance of ethical principles before implementing generative AI in clinical practice. In an attempt to address these issues, we explore ethical principles from the military perspective and propose the “GREAT PLEA” ethical principles for generative AI in healthcare: Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy. Furthermore, by contrasting the ethical concerns and risks of the two fields, we introduce a framework for adopting and expanding these ethical principles in a practical way that has been useful in the military and can be applied to generative AI in healthcare. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
- Opening a conversation on responsible environmental data science in the age of large language models: The general public and scientific community alike are abuzz over the release of ChatGPT and GPT-4. Among the many concerns being raised about the emergence and widespread use of tools based on large language models (LLMs) is their potential to propagate biases and inequities. We hope to open a conversation within the environmental data science community to encourage the circumspect and responsible use of LLMs. Here, we pose a series of questions aimed at fostering discussion and initiating a larger dialogue. To improve literacy on these tools, we provide background information on the LLMs that underpin tools like ChatGPT. We identify key areas in research and teaching in environmental data science where these tools may be applied, and discuss limitations to their use and points of concern. We also discuss ethical considerations surrounding the use of LLMs to ensure that, as environmental data scientists, researchers, and instructors, we can make well-considered and informed choices about engagement with these tools. Our goal is to spark forward-looking discussion and research on how, as a community, we can responsibly integrate generative AI technologies into our work.
- Background: Characterizing principles of co-learning and stakeholder engagement for community-engaged research is becoming increasingly important. As low-income communities, Indigenous communities, and communities of color all over the world disproportionately feel the social, health, and economic impacts of environmental hazards, especially climate change, it is imperative to co-learn with these communities so that their lived experience and knowledge guide the building and sharing of a knowledge base and the development of equitable solutions. Objectives: This paper presents recent theoretical and practical support for the development of co-learning principles to guide climate adaptation and health equity innovations. We describe this development process, which included both a literature review and stakeholder engagement. The process and the resultant set of principles are relevant to community health partnerships. Adopting principles to guide design, development, and implementation prior to the commencement of community health projects will help ensure that they are nonextractive and achieve maximum benefits for beneficiaries. Methods: A multiuniversity research team adopted this approach at the outset of a research endeavor in 2022. The team is currently conducting principle-based field research in non-U.S. locations where climate hazards and structural inequities have created health disparities. Conclusions: The team’s advisory board and its funder expressed enthusiasm about the development of these principles and about the prospect of Western researchers conducting a project in a way that values Indigenous and traditional communities as partners and knowledge-holders and has the potential to bring benefits to the communities involved, including increased capacity for activities promoting health, equity, and well-being.