The spread of infectious diseases is a highly complex spatiotemporal process that is difficult to understand, predict, and respond to effectively. Machine learning and artificial intelligence (AI) have achieved impressive results in other learning and prediction tasks; however, while many AI solutions are developed for disease prediction, only a few are adopted by decision-makers to support policy interventions. Among the issues preventing their uptake, AI methods are known to amplify the bias in the data they are trained on. This is especially problematic for infectious disease models, which typically leverage large, open, and inherently biased spatiotemporal data. These biases may propagate through the modeling pipeline to decision-making, resulting in inequitable policy interventions. There is therefore a need to understand how bias can be mitigated across the AI disease modeling pipeline: in the input data, in the models themselves (in-processing), and in the outputs. Specifically, our vision is to develop a large-scale micro-simulation of individuals from which human mobility, population, and disease ground-truth data can be obtained. From this complete dataset, which need not reflect the real world, we can sample and inject different types of bias. Because the bias in the sampled data is known exactly (it is given as a simulation parameter), we can explore how existing solutions for fairness in AI mitigate and correct these biases and investigate novel AI fairness solutions. Achieving this vision would improve trust in such models for informing fair and equitable policy interventions.
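As a rough illustration of the sampling step described above, the sketch below draws an "observed" dataset from a synthetic ground-truth population while injecting a known under-reporting bias against one group. The column names, group labels, and bias parameters are illustrative assumptions, not details from the abstract.

```python
import numpy as np
import pandas as pd

# Minimal sketch: sample an observed dataset from a synthetic ground-truth
# population while injecting a known under-reporting bias for one group.
# Column names and bias parameters are illustrative assumptions.

rng = np.random.default_rng(seed=7)

# Synthetic "complete" population, as produced by a micro-simulation.
population = pd.DataFrame({
    "agent_id": np.arange(10_000),
    "group": rng.choice(["urban", "rural"], size=10_000, p=[0.7, 0.3]),
    "infected": rng.random(10_000) < 0.05,  # ground-truth infection status
})

def sample_with_reporting_bias(df, base_rate=0.8, rural_penalty=0.5):
    """Observe each infected agent with a probability that is lower for
    rural agents, mimicking unequal surveillance coverage. The bias is
    known exactly because it is a simulation parameter."""
    detect_prob = np.where(df["group"] == "rural",
                           base_rate * rural_penalty, base_rate)
    observed = df["infected"] & (rng.random(len(df)) < detect_prob)
    return df.assign(reported=observed)

biased_sample = sample_with_reporting_bias(population)
# Compare true prevalence with reported prevalence per group.
print(biased_sample.groupby("group")[["infected", "reported"]].mean())
```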
Artificial Intelligence for End Tidal Capnography Guided Resuscitation: A Conceptual Framework
Artificial Intelligence (AI) and machine learning have advanced healthcare by uncovering relationships in complex conditions. Out-of-hospital cardiac arrest (OHCA) is a medically complex condition with several etiologies. Survival from OHCA has remained static at 10% for decades in the United States. Treatment of OHCA requires the coordination of numerous interventions, including the delivery of multiple medications. Current resuscitation algorithms follow a single strict pathway, regardless of fluctuating cardiac physiology. OHCA resuscitation therefore requires a real-time biomarker that can guide interventions to improve outcomes. End-tidal capnography (ETCO2) is commonly used by emergency medical services professionals during resuscitation and can serve as an ideal biomarker. However, there are no effective conceptual frameworks for utilizing the continuous ETCO2 data. In this manuscript, we detail a conceptual framework using AI and machine learning techniques to leverage ETCO2 for guided resuscitation.
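One way to make the framework concrete is to summarize the continuous ETCO2 stream into per-window features that a downstream model (for example, the reinforcement learning policy suggested by the keywords) could treat as its state. The window length and feature choices below are assumptions for illustration, not the manuscript's specification.

```python
import numpy as np

# Minimal sketch: summarize a continuous ETCO2 stream (mmHg, ~1 Hz samples)
# into per-window features usable as a model state. Window length and
# feature choices are illustrative assumptions.

def etco2_state(etco2_mmHg, window_s=30):
    """Return simple features for the most recent window of ETCO2 samples."""
    window = np.asarray(etco2_mmHg[-window_s:], dtype=float)
    slope = np.polyfit(np.arange(len(window)), window, deg=1)[0]
    return {
        "mean": float(window.mean()),        # overall perfusion proxy
        "trend": float(slope),               # rising ETCO2 may signal ROSC
        "variability": float(window.std()),  # compression-quality proxy
    }

# Synthetic two-minute stream purely for demonstration.
stream = (12 + 3 * np.sin(np.linspace(0, 4, 120))
          + np.random.default_rng(0).normal(0, 0.5, 120))
print(etco2_state(stream))
```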
- Award ID(s): 2037398
- PAR ID: 10491302
- Publisher / Repository: HICSS Conference Office, University of Hawaii at Manoa
- Date Published:
- Journal Name: Proceedings of the 57th Hawaii International Conference on System Sciences
- ISSN: 2572-6862
- Subject(s) / Keyword(s): Artificial intelligence, cardiac arrest, resuscitation, end tidal capnography, reinforcement learning
- Format(s): Medium: X
- Location: Honolulu, HI
- Sponsoring Org: National Science Foundation
More Like this
Urban mobility is a critical contributor to greenhouse gas emissions, accounting for over 30% of urban carbon emissions in the United States in 2021. Addressing this challenge requires a comprehensive and data-driven approach to transform transportation systems into sustainable networks. This paper presents an integrated framework that leverages artificial intelligence (AI), machine learning (ML), and life cycle assessment (LCA) to analyze, model, and optimize urban mobility. The framework consists of four key components: AI-powered analysis and models, synthetic urban mobility data generation, LCA for environmental footprint analysis, and data-driven policy interventions. By combining these elements, the framework not only deciphers complex mobility patterns but also quantifies their environmental impacts, providing actionable insights for policy decisions aimed at reducing carbon emissions and promoting sustainable urban transportation. The implications of this approach extend beyond individual cities, offering a blueprint for global sustainable urban mobility.
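To illustrate the LCA component, a minimal sketch might combine (possibly model-predicted) travel demand by mode with per-mode life-cycle emission factors. The factor values and mode names below are placeholders, not validated LCA data from the paper.

```python
# Minimal sketch of the LCA step: combine predicted trip volumes with
# per-mode life-cycle emission factors to estimate a carbon footprint.
# The factor values below are placeholders, not validated LCA data.

LIFECYCLE_G_CO2E_PER_PKM = {  # grams CO2-equivalent per passenger-km (assumed)
    "car": 250.0,
    "bus": 100.0,
    "rail": 50.0,
    "bike": 10.0,
}

def mobility_footprint(passenger_km_by_mode):
    """Sum life-cycle emissions (kg CO2e) over all modes."""
    return sum(
        pkm * LIFECYCLE_G_CO2E_PER_PKM[mode] / 1000.0
        for mode, pkm in passenger_km_by_mode.items()
    )

# Example: daily demand for a hypothetical district, e.g. from an ML model.
demand = {"car": 120_000, "bus": 40_000, "rail": 60_000, "bike": 5_000}
print(f"{mobility_footprint(demand):.0f} kg CO2e per day")
```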
This study introduces AutoCLC, an AI-powered system designed to assess and provide feedback on closed-loop communication (CLC) in professional learning environments. CLC, where a sender’s Call-Out statement is acknowledged by the receiver’s Check-Back statement, is a critical safety protocol in high-reliability domains, including emergency medicine resuscitation teams. Existing methods for evaluating CLC lack quantifiable metrics and depend heavily on human observation. AutoCLC addresses these limitations by leveraging natural language processing and large language models to analyze audio recordings from Advanced Cardiovascular Life Support (ACLS) simulation training. The system identifies CLC instances, measures their frequency and rate per minute, and categorizes communications as effective, incomplete, or missed. Technical evaluations demonstrate AutoCLC achieves 78.9% precision for identifying Call-Outs and 74.3% for Check-Backs, with a performance gap of only 5% compared to human annotations. A user study involving 11 cardiac arrest instructors across three training sites supported the need for automated CLC assessment. Instructors found AutoCLC reports valuable for quantifying CLC frequency and quality, as well as for providing actionable, example-based feedback. Participants rated AutoCLC highly, with a System Usability Scale score of 76.4%, reflecting above-average usability. This work represents a significant step toward developing scalable, data-driven feedback systems that enhance individual skills and team performance in high-reliability settings.
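A minimal sketch of the pairing logic such a system could use is shown below: each Call-Out is matched to the next Check-Back within a time window and labeled effective, incomplete, or missed, and a per-minute rate is computed. The utterance schema, time window, and labeling rule are assumptions, not AutoCLC's published implementation.

```python
from dataclasses import dataclass

# Minimal sketch of CLC scoring: match each Call-Out to the next Check-Back
# within a time window and label it effective, incomplete, or missed.
# The schema, 10-second window, and labeling rule are assumptions.

@dataclass
class Utterance:
    t: float       # seconds from start of recording
    speaker: str
    kind: str      # "callout", "checkback", or "other"

def score_clc(utterances, window_s=10.0):
    results = []
    checkbacks = [u for u in utterances if u.kind == "checkback"]
    for call in (u for u in utterances if u.kind == "callout"):
        reply = next((c for c in checkbacks
                      if call.t < c.t <= call.t + window_s
                      and c.speaker != call.speaker), None)
        if reply is None:
            results.append((call, None, "missed"))
        elif reply.t - call.t <= window_s / 2:
            results.append((call, reply, "effective"))
        else:
            results.append((call, reply, "incomplete"))
    return results

utts = [Utterance(5, "lead", "callout"), Utterance(8, "nurse", "checkback"),
        Utterance(40, "lead", "callout")]
scored = score_clc(utts)
duration_min = 1.0  # assumed recording length in minutes
print(f"CLC rate: {len(scored) / duration_min:.1f}/min", [s[2] for s in scored])
```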
Recent advances in generative artificial intelligence (AI) and multimodal learning analytics (MMLA) have allowed for new and creative ways of leveraging AI to support K12 students' collaborative learning in STEM+C domains. To date, there is little evidence of AI methods supporting students' collaboration in complex, open-ended environments. AI systems are known to underperform humans in (1) interpreting students' emotions in learning contexts, (2) grasping the nuances of social interactions and (3) understanding domain-specific information that was not well-represented in the training data. As such, combined human and AI (ie, hybrid) approaches are needed to overcome the current limitations of AI systems. In this paper, we take a first step towards investigating how a human-AI collaboration between teachers and researchers using an AI-generated multimodal timeline can guide and support teachers' feedback while addressing students' STEM+C difficulties as they work collaboratively to build computational models and solve problems. In doing so, we present a framework characterizing the human component of our human-AI partnership as a collaboration between teachers and researchers. To evaluate our approach, we present our timeline to a high school teacher and discuss the key insights gleaned from our discussions. Our case study analysis reveals the effectiveness of an iterative approach to using human-AI collaboration to address students' STEM+C challenges: the teacher can use the AI-generated timeline to guide formative feedback for students, and the researchers can leverage the teacher's feedback to help improve the multimodal timeline. Additionally, we characterize our findings with respect to two events of interest to the teacher: (1) when the students cross a difficulty threshold, and (2) the point of intervention, that is, when the teacher (or system) should intervene to provide effective feedback. It is important to note that the teacher explained that there should be a lag between (1) and (2) to give students a chance to resolve their own difficulties. Typically, such a lag is not implemented in computer-based learning environments that provide feedback; a minimal sketch of this timing logic follows the practitioner notes below.
Practitioner notes

What is already known about this topic
- Collaborative, open-ended learning environments enhance students' STEM+C conceptual understanding and practice, but they introduce additional complexities when students learn concepts spanning multiple domains.
- Recent advances in generative AI and MMLA allow for integrating multiple datastreams to derive holistic views of students' states, which can support more informed feedback mechanisms to address students' difficulties in complex STEM+C environments.
- Hybrid human-AI approaches can help address collaborating students' STEM+C difficulties by combining the domain knowledge, emotional intelligence and social awareness of human experts with the general knowledge and efficiency of AI.

What this paper adds
- We extend a previous human-AI collaboration framework using a hybrid intelligence approach to characterize the human component of the partnership as a researcher-teacher partnership and present our approach as a teacher-researcher-AI collaboration.
- We adapt an AI-generated multimodal timeline to actualize our human-AI collaboration by pairing the timeline with videos of students encountering difficulties, engaging in active discussions with a high school teacher while watching the videos to discern the timeline's utility in the classroom.
- From our discussions with the teacher, we define two types of inflection points to address students' STEM+C difficulties (the difficulty threshold and the intervention point) and discuss how the feedback latency interval separating them can inform educator interventions.
- We discuss two ways in which our teacher-researcher-AI collaboration can help teachers support students encountering STEM+C difficulties: (1) teachers using the multimodal timeline to guide feedback for students, and (2) researchers using teachers' input to iteratively refine the multimodal timeline.

Implications for practice and/or policy
- Our case study suggests that timeline gaps (ie, disengaged behaviour identified by off-screen students, pauses in discourse and lulls in environment actions) are particularly important for identifying inflection points and formulating formative feedback.
- Human-AI collaboration exists on a dynamic spectrum and requires varying degrees of human control and AI automation depending on the context of the learning task and students' work in the environment.
- Our analysis of this human-AI collaboration using a multimodal timeline can be extended in the future to support students and teachers in additional ways, for example, designing pedagogical agents that interact directly with students, developing intervention and reflection tools for teachers, helping teachers craft daily lesson plans and aiding teachers and administrators in designing curricula.
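A minimal sketch of the timing logic implied by the two inflection points, assuming a per-second difficulty score and fixed threshold and lag values (neither of which is specified in the paper), might look like this:

```python
# Minimal sketch of the two inflection points: flag the moment a running
# difficulty score crosses a threshold, then schedule the intervention only
# after a feedback-latency interval so students can first try to resolve
# the difficulty themselves. Threshold and lag values are assumptions.

def find_inflection_points(difficulty_by_second, threshold=0.7, lag_s=60):
    """Return (difficulty_threshold_time, intervention_time) or None."""
    for t, score in enumerate(difficulty_by_second):
        if score >= threshold:
            return t, t + lag_s
    return None

# Example: difficulty estimated each second, e.g. from a multimodal timeline.
scores = [0.2] * 30 + [0.5] * 20 + [0.8] * 10
points = find_inflection_points(scores)
if points is not None:
    crossed, intervene = points
    print(f"difficulty threshold at t={crossed}s; intervene at t={intervene}s")
```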
Fostering young learners’ literacy surrounding AI technologies is becoming increasingly important as AI is becoming integrated in many aspects of our lives and is having far-reaching impacts on society. We have developed Knowledge Net and Creature Features, two activity boxes for family groups to engage with in their homes that communicate AI literacy competencies such as understanding knowledge representations, the steps of machine learning, and AI ethics. Our current work is exploring how to transform these activity boxes into museum exhibits for middle-school age learners, focusing on three key considerations: centering learner interests, generating personally meaningful outputs, and incorporating embodiment and collaboration on a larger scale. Our demonstration will feature the existing Knowledge Net and Creature Features activity boxes alongside early-stage prototypes adapting these activities into larger-scale museum exhibits. This paper contributes an exploration into how to design AI literacy learning interventions for varied informal learning contexts.

