Artificial Intelligence (AI) is a transformative force in communication and messaging strategy, with the potential to disrupt traditional approaches. Large language models (LLMs), a form of AI, are capable of generating high-quality, humanlike text. We investigate the persuasive quality of AI-generated messages to understand how AI could impact public health messaging. Specifically, through a series of studies designed to characterize and evaluate generative AI in developing public health messages, we analyze COVID-19 pro-vaccination messages generated by GPT-3, a state-of-the-art instantiation of a large language model. Study 1 is a systematic evaluation of GPT-3's ability to generate pro-vaccination messages. Study 2 then examined people's perceptions of curated GPT-3-generated messages compared to human-authored messages released by the CDC (Centers for Disease Control and Prevention), finding that GPT-3 messages were perceived as more effective and as stronger arguments, and evoked more positive attitudes, than CDC messages. Finally, Study 3 assessed the role of source labels on perceived quality, finding that while participants preferred the AI-generated messages themselves, they rated messages labeled as AI-generated less favorably. The results suggest that, with human supervision, AI can be used to create effective public health messages, but that individuals prefer their public health messages to come from human institutions rather than AI sources. We propose best practices for assessing generative outputs of large language models in future social science research and ways health professionals can use AI systems to augment public health messaging.
AI for Climate Justice: Assessing Large Language Models from an Intersectional Lens
Generative Artificial Intelligence (AI) tools, including language models that can create novel content, have shown promise for science communication at scale. However, these tools may produce inaccuracies and biases, particularly against marginalized populations. In this work, we examined the potential of GPT-4, a generative AI model, in creating content for climate science communication. We analyzed 100 messages generated by GPT-4 that included descriptors of intersecting identities, science communication mediums, locations, and climate justice issues. Our analyses revealed that community awareness and actions emerged as prominent themes in the generated messages, while systemic critiques of climate justice issues were less present. We discuss how intersectional lenses can help researchers examine the underlying assumptions of emerging technology before its integration into learning contexts.
- Award ID(s):
- 2241596
- PAR ID:
- 10526138
- Publisher / Repository:
- International Society of the Learning Sciences
- Date Published:
- Page Range / eLocation ID:
- 1055 to 1058
- Format(s):
- Medium: X
- Location:
- https://repository.isls.org/handle/1/10617
- Sponsoring Org:
- National Science Foundation
More Like this
-
In the face of climate change, climate literacy is becoming increasingly important. With wide access to generative AI tools, such as OpenAI's ChatGPT, we explore the potential of AI platforms to answer ordinary citizens' climate literacy questions. Here, we focus on a global scale and collect responses from ChatGPT (GPT-3.5 and GPT-4) to climate change-related hazard prompts over multiple iterations using OpenAI's API, comparing the results with credible hazard risk indices. We find general agreement in these comparisons and consistency in ChatGPT's responses across iterations, with GPT-4 displaying fewer errors than GPT-3.5. Generative AI tools may be used for climate literacy, a timely topic of importance, but moving forward they must be scrutinized for potential biases and inaccuracies and considered in a social context. Future work should identify and disseminate best practices for optimal use across various generative AI tools.
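The repeated-prompting design this abstract describes can be sketched as follows. This is a minimal illustration, not the authors' code: the prompt template, the risk categories, and the pluggable `ask` client are all assumptions; in practice `ask` would wrap a call to OpenAI's API.

```python
# Sketch of collecting repeated LLM responses to the same hazard prompt and
# measuring their consistency across iterations. The prompt wording and risk
# labels are illustrative, not taken from the study.
from collections import Counter
from typing import Callable

PROMPT = "Rate the climate change-related hazard risk for {country} as LOW, MEDIUM, or HIGH."

def collect_responses(ask: Callable[[str], str], country: str, iterations: int = 5) -> Counter:
    """Send the same hazard prompt several times and tally the answers."""
    prompt = PROMPT.format(country=country)
    return Counter(ask(prompt) for _ in range(iterations))

def consistency(tally: Counter) -> float:
    """Share of iterations agreeing with the modal answer (1.0 = fully consistent)."""
    total = sum(tally.values())
    return tally.most_common(1)[0][1] / total if total else 0.0
```

With a real API client substituted for `ask`, the resulting tallies could then be compared against external hazard risk indices, as the study does.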
-
The recent public releases of AI tools such as ChatGPT have forced computer science educators to reconsider how they teach. These tools have demonstrated considerable ability to generate code and answer conceptual questions, rendering them incredibly useful for completing CS coursework. While overreliance on AI tools could hinder students’ learning, we believe they have the potential to be a helpful resource for both students and instructors alike. We propose a novel system for instructor-mediated GPT interaction in a class discussion board. By automatically generating draft responses to student forum posts, GPT can help Teaching Assistants (TAs) respond to student questions in a more timely manner, giving students an avenue to receive fast, quality feedback on their solutions without turning to ChatGPT directly. Additionally, since they are involved in the process, instructors can ensure that the information students receive is accurate, and can provide students with incremental hints that encourage them to engage critically with the material, rather than just copying an AI-generated snippet of code. We utilize Piazza—a popular educational forum where TAs help students via text exchanges—as a venue for GPT-assisted TA responses to student questions. These student questions are sent to GPT-4 alongside assignment instructions and a customizable prompt, both of which are stored in editable instructor-only Piazza posts. We demonstrate an initial implementation of this system, and provide examples of student questions that highlight its benefits.
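The instructor-mediated flow above can be sketched in outline: a student question is combined with the assignment instructions and a customizable instructor prompt, and the model's draft is held for TA approval rather than posted directly. This is a hedged sketch under stated assumptions; the function and field names are illustrative, and `llm` stands in for a GPT-4 API call.

```python
# Sketch of a TA-in-the-loop drafting pipeline: the LLM drafts a reply, but a
# human must approve it before it reaches the student. Names are illustrative.
from dataclasses import dataclass
from typing import Callable

@dataclass
class DraftResponse:
    student_question: str
    draft: str
    approved: bool = False  # a TA flips this after reviewing/editing the draft

def build_prompt(instructor_prompt: str, assignment_instructions: str, question: str) -> str:
    """Combine the editable instructor prompt, assignment context, and the
    student's question into one request for the model."""
    return (f"{instructor_prompt}\n\n"
            f"Assignment instructions:\n{assignment_instructions}\n\n"
            f"Student question:\n{question}\n\n"
            "Draft a hint that guides the student without giving complete code.")

def draft_reply(llm: Callable[[str], str], instructor_prompt: str,
                assignment_instructions: str, question: str) -> DraftResponse:
    prompt = build_prompt(instructor_prompt, assignment_instructions, question)
    return DraftResponse(student_question=question, draft=llm(prompt))
```

The `approved` flag is the key design choice: drafts default to unapproved, so nothing AI-generated reaches students without instructor review.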
-
Opening a conversation on responsible environmental data science in the age of large language models
The general public and scientific community alike are abuzz over the release of ChatGPT and GPT-4. Among the many concerns being raised about the emergence and widespread use of tools based on large language models (LLMs) is the potential for them to propagate biases and inequities. We hope to open a conversation within the environmental data science community to encourage the circumspect and responsible use of LLMs. Here, we pose a series of questions aimed at fostering discussion and initiating a larger dialogue. To improve literacy on these tools, we provide background information on the LLMs that underpin tools like ChatGPT. We identify key areas in research and teaching in environmental data science where these tools may be applied, and discuss limitations to their use and points of concern. We also discuss ethical considerations surrounding the use of LLMs to ensure that as environmental data scientists, researchers, and instructors, we can make well-considered and informed choices about engagement with these tools. Our goal is to spark forward-looking discussion and research on how as a community we can responsibly integrate generative AI technologies into our work.
-
Justice-centred science pedagogy has been suggested as an effective framework for supporting teachers in bringing in culturally relevant pedagogy to their science classrooms; however, limited instructional tools exist that introduce social dimensions of science in ways teachers feel confident navigating. In this article, we add to the justice-centred science pedagogy framework by offering tools to make sense of science and social factors and introduce socioscientific modelling as an instructional strategy for attending to social dimensions of science in ways that align with justice-centred science pedagogy. Socioscientific modelling offers an inclusive, culturally responsive approach to education in science, technology, engineering, the arts and mathematics through welcoming students’ diverse repertoires of personal and community knowledge and linking disciplinary knowledge with social dimensions. In this way, students can come to view content knowledge as a tool for making sense of inequitable systems and societal injustices. Using data from an exploratory study conducted in summer 2022, we present emerging evidence of how this type of modelling has shown students to demonstrate profound insight into social justice science issues, construct understandings that are personally meaningful and engage in sophisticated reasoning. We conclude with future considerations for the field.