Responsible AI (RAI) is the science and practice of ensuring the design, development, use, and oversight of AI are socially sustainable---benefiting diverse stakeholders while controlling the risks. Achieving this goal requires active engagement and participation from the broader public. This paper introduces We are AI: Taking Control of Technology, a public education course that brings the topics of AI and RAI to a general audience in a peer-learning setting. We outline the goals behind the course's development, discuss the multi-year iterative process that shaped its creation, and summarize its content. We also discuss two offerings of We are AI to an active and engaged group of librarians and professional staff at New York University, highlighting successes and areas for improvement. The course materials, including a multilingual comic book series by the same name, are publicly available and can be used independently. By sharing our experience in creating and teaching We are AI, we aim to introduce these resources to the community of AI educators, researchers, and practitioners, supporting their public education efforts.
AI Can Be a Powerful Social Innovation for Public Health if Community Engagement Is at the Core
There is a critical need for community engagement in the process of adopting artificial intelligence (AI) technologies in public health. Public health practitioners and researchers have historically innovated in areas like vaccination and sanitation but have been slower in adopting emerging technologies such as generative AI. However, with increasingly complex funding, programming, and research requirements, the field now faces a pivotal moment to enhance its agility and responsiveness to evolving health challenges. Participatory methods and community engagement are key components of many current public health programs and research. The field of public health is well positioned to ensure community engagement is part of AI technologies applied to population health issues. Without such engagement, the adoption of these technologies in public health may exclude significant portions of the population, particularly those with the fewest resources, with the potential to exacerbate health inequities. Risks to privacy and perpetuation of bias are more likely to be avoided if AI technologies in public health are designed with knowledge of community engagement, existing health disparities, and strategies for improving equity. This viewpoint proposes a multifaceted approach to ensure safer and more effective integration of AI in public health with the following call to action: (1) include the basics of AI technology in public health training and professional development; (2) use a community engagement approach to co-design AI technologies in public health; and (3) introduce governance and best practice mechanisms that can guide the use of AI in public health to prevent or mitigate potential harms. These actions will support the application of AI to varied public health domains through a framework for more transparent, responsive, and equitable use of this evolving technology, augmenting the work of public health practitioners and researchers to improve health outcomes while minimizing risks and unintended consequences.
- Award ID(s): 2339880
- PAR ID: 10579705
- Publisher / Repository: Journal of Medical Internet Research
- Date Published:
- Journal Name: Journal of Medical Internet Research
- Volume: 27
- ISSN: 1438-8871
- Page Range / eLocation ID: e68198
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
This paper addresses two policy questions. First, how might evolving technologies associated with broadband networks enhance or hinder marginalized or underserved population groups' effective use of and access to information resources? Second, how can we foster public hybrid broadband, situating broadband networks within existing communities, as a means to promote digital self-determination? This study finds that wireless mesh technology initiatives can create and foster community engagement through infrastructure deployment, maintenance, and use; combat myths regarding marginalized demographics and technology; and provide marginalized communities with an opportunity to become decision-makers regarding communications technology infrastructure development.
Opening a conversation on responsible environmental data science in the age of large language models
The general public and scientific community alike are abuzz over the release of ChatGPT and GPT-4. Among many concerns being raised about the emergence and widespread use of tools based on large language models (LLMs) is the potential for them to propagate biases and inequities. We hope to open a conversation within the environmental data science community to encourage the circumspect and responsible use of LLMs. Here, we pose a series of questions aimed at fostering discussion and initiating a larger dialogue. To improve literacy on these tools, we provide background information on the LLMs that underpin tools like ChatGPT. We identify key areas in research and teaching in environmental data science where these tools may be applied, and discuss limitations to their use and points of concern. We also discuss ethical considerations surrounding the use of LLMs to ensure that, as environmental data scientists, researchers, and instructors, we can make well-considered and informed choices about engagement with these tools. Our goal is to spark forward-looking discussion and research on how, as a community, we can responsibly integrate generative AI technologies into our work.
In 2020, the U.S. Department of Defense officially disclosed a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between the military and medical service. Warriors on battlefields often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or during surgery treating a life-threatening condition. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and the abundance of health data, such as electronic health records, electrocardiograms, and medical images, increases, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has garnered a lot of attention in the medical research community, leading to debates about its application in the healthcare sector, mainly due to concerns about transparency and related issues. Meanwhile, questions around the potential exacerbation of health disparities due to modeling biases have raised notable ethical concerns regarding the use of this technology in healthcare. However, the ethical principles for generative AI in healthcare have been understudied. As a result, there are no clear solutions to address ethical concerns, and decision-makers often neglect to consider the significance of ethical principles before implementing generative AI in clinical practice. In an attempt to address these issues, we explore ethical principles from the military perspective and propose the "GREAT PLEA" ethical principles, namely Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy, for generative AI in healthcare. Furthermore, we introduce a framework for adopting and expanding these ethical principles in a practical way that has been useful in the military and can be applied to healthcare for generative AI, based on contrasting their ethical concerns and risks. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
Background: In recent years, public health research has shifted to more strengths- or asset-based approaches, but there is little understanding of what this concept means to Indigenous researchers. Our purpose was therefore to define an Indigenous strengths-based approach to health and well-being research.
Methods: Using Group Concept Mapping, Indigenous health researchers (N = 27) participated in three phases. Phase 1: participants provided 218 unique responses to the focus prompt “Indigenous Strengths-Based Health and Wellness Research…” Redundancies and irrelevant statements were removed using content analysis, resulting in a final set of 94 statements. Phase 2: participants sorted the statements into groupings, named these groupings, and rated each statement on importance using a 4-point scale. Hierarchical cluster analysis was used to create clusters based on how statements were grouped by participants. Phase 3: two virtual meetings were held to share results and invite researchers to collaboratively interpret them.
Results: A six-cluster map representing the meaning of Indigenous strengths-based health and wellness research was created. Mean rating analysis showed that all six clusters were rated, on average, as moderately important.
Conclusions: The definition of Indigenous strengths-based health research, created through collaboration with leading AI/AN (American Indian/Alaska Native) health researchers, centers Indigenous knowledges and cultures while shifting the research narrative from one of illness to one of flourishing and relationality. This framework offers actionable steps to researchers, public health practitioners, funders, and institutions to promote relational, strengths-based research that has the potential to promote Indigenous health and wellness at the individual, family, community, and population levels.
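For readers unfamiliar with how Group Concept Mapping sorting and rating data are typically analyzed, the following is a minimal illustrative sketch, not the study's actual analysis or dataset: participants' card sorts are aggregated into a statement-by-statement co-occurrence matrix, converted to distances, clustered hierarchically, and summarized by mean importance rating per cluster. All function names, data shapes, and parameter choices (e.g., Ward linkage) are assumptions made for illustration.

```python
# Minimal Group Concept Mapping analysis sketch (hypothetical data, not the study's dataset).
# Each participant's sort is a list of groups; each group is an iterable of statement indices.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

N_STATEMENTS = 94   # final statement set size reported in the abstract
N_CLUSTERS = 6      # number of clusters in the resulting concept map


def cooccurrence_matrix(sorts, n):
    """Count how often each pair of statements was sorted into the same group."""
    co = np.zeros((n, n))
    for sort in sorts:              # one sort per participant
        for group in sort:          # group = iterable of statement indices
            for i in group:
                for j in group:
                    co[i, j] += 1
    return co


def concept_map(sorts, ratings, n=N_STATEMENTS, k=N_CLUSTERS):
    """Cluster statements from sort data and return mean importance rating per cluster."""
    co = cooccurrence_matrix(sorts, n)
    dist = 1.0 - co / len(sorts)    # similarity -> distance in [0, 1]
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="ward")
    labels = fcluster(Z, t=k, criterion="maxclust")
    # ratings: participants x statements array on the 4-point importance scale.
    mean_ratings = ratings.mean(axis=0)
    return {c: mean_ratings[labels == c].mean() for c in np.unique(labels)}
```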