In a technology-centric world, leveraging digital tools such as chatbots allows educators to engage students in ways that may be more accessible for both parties, particularly in large lecture classrooms. This report details the development of an interactive web-based chatbot that curates content for writing about chemistry in context. Students were assigned a 500-word paper in which they discussed general chemistry concepts through the lens of a timely, sustainability-related topic, namely water footprint, carbon footprint, or embodied carbon. Discussed herein are the development of the decision tree, the chatbot's components, and results from the initial implementation in a large-lecture general chemistry classroom. Of the 347 enrolled students, 271 (over 78%) used the chatbot, generating more than 350 interactions in the 3 weeks leading up to the paper's due date. Eighty-three percent of the interactions were captured for further analysis, which showed that 22% of students used the chatbot more than once and that 46% of recorded interactions served to help students develop or refine their idea for the assignment. The curated chatbot technology reported here for writing assignments in chemistry can be readily adapted to other aspects of chemistry coursework.
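The decision-tree structure behind such a curated chatbot can be sketched briefly. This is a minimal illustration only: the node prompts, option labels, and topic wording below are invented for the example and are not the authors' actual tree.

```python
# Minimal sketch of a decision-tree chatbot of the kind described above.
# All prompts, labels, and topics are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Node:
    prompt: str                                   # text shown to the student
    options: dict = field(default_factory=dict)   # choice label -> child Node

    def is_leaf(self) -> bool:
        return not self.options

def walk(node: Node, choices: list) -> str:
    """Follow a sequence of student choices down the tree and return the prompt reached."""
    for choice in choices:
        if node.is_leaf():
            break
        node = node.options[choice]
    return node.prompt

# Hypothetical fragment: route a student to one of the three sustainability topics.
tree = Node(
    "Which topic interests you?",
    {
        "water": Node("Water footprint: start by estimating daily water use..."),
        "carbon": Node("Carbon footprint: relate CO2 emissions to moles of fuel burned..."),
        "embodied": Node("Embodied carbon: consider the materials in a building..."),
    },
)

print(walk(tree, ["carbon"]))
```

Each interaction with such a bot is a path from the root to a node, which also makes usage easy to log and analyze, as the study above does.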
This content will become publicly available on May 15, 2026
Leveraging Conversational AI for Adolescent Medical Financial Education
Medical financial literacy is essential for making sound decisions in healthcare settings and avoiding unanticipated financial hardship. Existing literature has shown that young adults often struggle to understand information associated with health insurance and the financial planning necessary for health-related costs. AI-driven chatbots are emerging as educational tools with the potential to address this issue. This exploratory study examined an AI chatbot aimed at enhancing medical financial literacy among high school students. Participants engaged with the chatbot's responses to medical financial questions while rating the clarity, ease of use, trustworthiness, and educational value of the engagement. Our results indicated that the chatbot increased students' understanding of the financial aspects of healthcare: 76.9 percent of students reported a high degree of understanding, 80.8 percent rated the chatbot's responses as clear, and 73.1 percent said they would recommend it to a peer. The responses indicated that students found the chatbot helpful but suggested adding interactive features and/or incorporating real-world finance features into the chatbot.
- Award ID(s): 2153509
- PAR ID: 10650515
- Publisher / Repository: Conference on Digital Government Research
- Journal Name: Conference on Digital Government Research
- Volume: 1
- ISSN: 3050-8681
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- There are many initiatives that teach Artificial Intelligence (AI) literacy to K-12 students. Most downsize college-level instructional materials into grade-level-appropriate formats, overlooking students' unique perspectives in the design of curricula. To investigate the use of educational games as a vehicle for uncovering youth's understanding of AI instruction, we co-designed games with 39 Black, Hispanic, and Asian high school girls and non-binary youth to create engaging learning materials for their peers. We conducted qualitative analyses of the designed game artifacts, student discourse, and their feedback on the efficacy of the learning activities. This study highlights the benefits of co-design and learning games for uncovering students' understanding and ability to apply AI concepts in game-based learning, their emergent perspectives on AI, and the prior knowledge that informs their game design choices. Our research uncovers students' AI misconceptions and informs the design of educational games and grade-level-appropriate AI instruction.
- Background People with low health literacy experience more challenges in understanding instructions given by their health providers, following prescriptions, and understanding their health care system sufficiently to obtain the maximum benefits. People with insufficient health literacy have a higher risk of making medical mistakes, a greater chance of experiencing adverse drug effects, and poorer control of chronic diseases. Objective This study aims to design, develop, and evaluate a mobile health app, MediReader, to help individuals better understand complex medical materials and improve their health literacy. Methods MediReader is designed and implemented through several steps: measure and understand an individual's health literacy level; identify medical terminologies that the individual may not understand based on their health literacy; annotate and interpret the identified medical terminologies tailored to the individual's reading skill level, with meanings defined in appropriate external knowledge sources; and evaluate MediReader using a task-based user study and satisfaction surveys. Results Based on a comparison with a control group, the user study results demonstrate that MediReader can improve users' understanding of medical documents. This improvement is particularly significant for users with low health literacy levels. The satisfaction survey showed that users are satisfied with the tool in general. Conclusions MediReader provides an easy-to-use interface for users to read and understand medical documents. It can effectively identify medical terms that a user may not understand, and then annotate and interpret them with appropriate meanings using language the user can understand. Experimental results demonstrate the feasibility of using this tool to improve an individual's understanding of medical materials.
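The term-identification and annotation step described above can be sketched as follows. The glossary entries, difficulty levels, and reader-level threshold here are invented for illustration; MediReader's actual external knowledge sources and literacy measures are not reproduced.

```python
# Hedged sketch of glossary-based term annotation: flag terms above a
# reader's literacy level and attach a plain-language meaning inline.
# The glossary and levels below are hypothetical.
import re

# hypothetical glossary: term -> (difficulty level, plain-language meaning)
GLOSSARY = {
    "hypertension": (2, "high blood pressure"),
    "deductible": (1, "the amount you pay before insurance starts paying"),
    "myocardial infarction": (3, "heart attack"),
}

def annotate(text: str, reader_level: int):
    """Return (annotated_text, flagged_terms) for glossary terms above the reader's level."""
    flagged = []
    for term, (level, meaning) in GLOSSARY.items():
        if level > reader_level and re.search(re.escape(term), text, re.IGNORECASE):
            # keep the original casing of the matched term, append the meaning
            text = re.sub(
                re.escape(term),
                lambda m, d=meaning: f"{m.group(0)} ({d})",
                text,
                flags=re.IGNORECASE,
            )
            flagged.append(term)
    return text, flagged

out, terms = annotate("Your hypertension raises cardiac risk.", reader_level=1)
print(out)
```

A real system would additionally need word-boundary matching and disambiguation, but the level-gated lookup captures the core idea of tailoring annotations to the reader.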
- In 2020, the U.S. Department of Defense officially disclosed a set of ethical principles to guide the use of Artificial Intelligence (AI) technologies on future battlefields. Despite stark differences, there are core similarities between military and medical service. Warriors on battlefields often face life-altering circumstances that require quick decision-making. Medical providers experience similar challenges in a rapidly changing healthcare environment, such as in the emergency department or during surgery treating a life-threatening condition. Generative AI, an emerging technology designed to efficiently generate valuable information, holds great promise. As computing power becomes more accessible and the abundance of health data, such as electronic health records, electrocardiograms, and medical images, increases, it is inevitable that healthcare will be revolutionized by this technology. Recently, generative AI has garnered a lot of attention in the medical research community, leading to debates about its application in the healthcare sector, mainly due to concerns about transparency and related issues. Meanwhile, questions around the potential exacerbation of health disparities due to modeling biases have raised notable ethical concerns regarding the use of this technology in healthcare. However, the ethical principles for generative AI in healthcare have been understudied. As a result, there are no clear solutions to address ethical concerns, and decision-makers often neglect to consider the significance of ethical principles before implementing generative AI in clinical practice. In an attempt to address these issues, we explore ethical principles from the military perspective and propose the "GREAT PLEA" ethical principles, namely Governability, Reliability, Equity, Accountability, Traceability, Privacy, Lawfulness, Empathy, and Autonomy, for generative AI in healthcare. Furthermore, we introduce a framework for adopting and expanding these ethical principles in a practical way, based on contrasting the ethical concerns and risks of the two domains, that has been useful in the military and can be applied to generative AI in healthcare. Ultimately, we aim to proactively address the ethical dilemmas and challenges posed by the integration of generative AI into healthcare practice.
- As generative artificial intelligence (AI) becomes increasingly integrated into society and education, more institutions are implementing AI usage policies and offering introductory AI courses. These courses, however, should not replicate the technical focus typically found in introductory computer science (CS) courses like CS1 and CS2. In this paper, we use an adjustable, interdisciplinary socio-technical AI literacy framework to design and present an introductory AI literacy course. We present a refined version of this framework informed by the teaching of a 1-credit general education AI literacy course (primarily for freshmen and first-year students from various majors), a 3-credit course for CS majors at all levels, and a summer camp for high school students. Drawing from these teaching experiences and the evolving research landscape, we propose an introductory AI literacy course design framework structured around four cross-cutting pillars. These pillars encompass (1) understanding the scope and technical dimensions of AI technologies, (2) learning how to interact with (generative) AI technologies, (3) applying principles of critical, ethical, and responsible AI usage, and (4) analyzing the implications of AI for society. We posit that achieving AI literacy is essential for all students, both those pursuing AI-related careers and those following other educational or professional paths. This introductory course, positioned at the beginning of a program, creates a foundation for ongoing and advanced AI education. The course design approach is presented as a series of modules and subtopics under each pillar. We emphasize the importance of thoughtful instructional design, including pedagogy, expected learning outcomes, and assessment strategies. This approach not only integrates social and technical learning but also democratizes AI education across diverse student populations and equips all learners with the socio-technical, multidisciplinary perspectives necessary to navigate and shape the ethical future of AI.