Abstract Recent advances in generative artificial intelligence (AI) and multimodal learning analytics (MMLA) have allowed for new and creative ways of leveraging AI to support K-12 students' collaborative learning in STEM+C domains. To date, there is little evidence of AI methods supporting students' collaboration in complex, open‐ended environments. AI systems are known to underperform humans in (1) interpreting students' emotions in learning contexts, (2) grasping the nuances of social interactions and (3) understanding domain‐specific information that was not well‐represented in the training data. As such, combined human and AI (ie, hybrid) approaches are needed to overcome the current limitations of AI systems. In this paper, we take a first step towards investigating how a human‐AI collaboration between teachers and researchers using an AI‐generated multimodal timeline can guide and support teachers' feedback while addressing students' STEM+C difficulties as they work collaboratively to build computational models and solve problems. In doing so, we present a framework characterizing the human component of our human‐AI partnership as a collaboration between teachers and researchers. To evaluate our approach, we present our timeline to a high school teacher and discuss the key insights gleaned from our discussions. Our case study analysis reveals the effectiveness of an iterative approach to using human‐AI collaboration to address students' STEM+C challenges: the teacher can use the AI‐generated timeline to guide formative feedback for students, and the researchers can leverage the teacher's feedback to help improve the multimodal timeline. Additionally, we characterize our findings with respect to two events of interest to the teacher: (1) when the students cross a difficulty threshold, and (2) the point of intervention, that is, when the teacher (or system) should intervene to provide effective feedback. It is important to note that the teacher explained that there should be a lag between (1) and (2) to give students a chance to resolve their own difficulties. Typically, such a lag is not implemented in computer‐based learning environments that provide feedback.
Practitioner notes

What is already known about this topic
- Collaborative, open‐ended learning environments enhance students' STEM+C conceptual understanding and practice, but they introduce additional complexities when students learn concepts spanning multiple domains.
- Recent advances in generative AI and MMLA allow for integrating multiple datastreams to derive holistic views of students' states, which can support more informed feedback mechanisms to address students' difficulties in complex STEM+C environments.
- Hybrid human‐AI approaches can help address collaborating students' STEM+C difficulties by combining the domain knowledge, emotional intelligence and social awareness of human experts with the general knowledge and efficiency of AI.

What this paper adds
- We extend a previous human‐AI collaboration framework using a hybrid intelligence approach to characterize the human component of the partnership as a researcher‐teacher partnership and present our approach as a teacher‐researcher‐AI collaboration.
- We adapt an AI‐generated multimodal timeline to actualize our human‐AI collaboration by pairing the timeline with videos of students encountering difficulties, engaging in active discussions with a high school teacher while watching the videos to discern the timeline's utility in the classroom.
- From our discussions with the teacher, we define two types of inflection points to address students' STEM+C difficulties—the difficulty threshold and the intervention point—and discuss how the feedback latency interval separating them can inform educator interventions.
- We discuss two ways in which our teacher‐researcher‐AI collaboration can help teachers support students encountering STEM+C difficulties: (1) teachers using the multimodal timeline to guide feedback for students, and (2) researchers using teachers' input to iteratively refine the multimodal timeline.

Implications for practice and/or policy
- Our case study suggests that timeline gaps (ie, disengaged behaviour identified by off‐screen students, pauses in discourse and lulls in environment actions) are particularly important for identifying inflection points and formulating formative feedback (a minimal gap‐detection sketch follows these notes).
- Human‐AI collaboration exists on a dynamic spectrum and requires varying degrees of human control and AI automation depending on the context of the learning task and students' work in the environment.
- Our analysis of this human‐AI collaboration using a multimodal timeline can be extended in the future to support students and teachers in additional ways, for example, designing pedagogical agents that interact directly with students, developing intervention and reflection tools for teachers, helping teachers craft daily lesson plans and aiding teachers and administrators in designing curricula.
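The abstract and notes describe timeline gaps as signals that students have crossed a difficulty threshold, with an intervention point delayed by a feedback latency interval. The Python sketch below is only a hypothetical illustration of that idea, not the paper's timeline system; the event structure, modality names, gap threshold and latency value are assumptions chosen for demonstration.

```python
# Illustrative sketch (not the authors' implementation): flag "gaps" across
# multimodal event streams, mark the first difficulty-threshold crossing, and
# delay the intervention point by a configurable feedback latency interval.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Event:
    timestamp: float   # seconds into the session
    modality: str      # e.g. "speech", "environment_action", "position"

def find_gaps(events: List[Event], min_gap: float) -> List[Tuple[float, float]]:
    """Return (start, end) spans with no events in ANY modality for min_gap seconds."""
    times = sorted(e.timestamp for e in events)
    return [(earlier, later) for earlier, later in zip(times, times[1:])
            if later - earlier >= min_gap]

def plan_intervention(events: List[Event],
                      difficulty_gap: float = 30.0,     # assumed threshold (s)
                      feedback_latency: float = 60.0) -> Optional[dict]:
    """Mark the first difficulty-threshold crossing, then delay the intervention
    point by the feedback latency interval so students can try to resolve the
    difficulty on their own first."""
    gaps = find_gaps(events, min_gap=difficulty_gap)
    if not gaps:
        return None
    threshold_time = gaps[0][0] + difficulty_gap
    return {"difficulty_threshold": threshold_time,
            "intervention_point": threshold_time + feedback_latency}

if __name__ == "__main__":
    demo = [Event(0.0, "speech"), Event(5.0, "environment_action"),
            Event(90.0, "speech")]        # a long lull between 5 s and 90 s
    print(plan_intervention(demo))
```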
This content will become publicly available on May 14, 2026
Artificial Intelligence Workshop: Introducing AI Concepts Using Neural Style Transfer
With the exponential growth of Artificial Intelligence (AI) technologies in recent years, it is more important than ever to increase AI literacy for students at multiple grade levels, including college and K-12 students. AI is a broad and complex topic that presents a steep learning curve for most students. This research project focuses on determining the best way to introduce AI to students with little to no prior experience in AI within the scope of a one-hour workshop setting, while still making sure they have fun exploring this new technology.
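Since the workshop centres on neural style transfer, a compact sketch of the classic optimization-based approach (in the style of Gatys et al.) may help picture the underlying technique. This is not the workshop's actual material; the layer indices, loss weights and step count below are common defaults assumed for illustration, and the first run downloads pretrained VGG19 weights.

```python
# Minimal neural style transfer sketch: optimize an image so its VGG features
# match the content image while its Gram matrices match the style image.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv layers commonly used for style
CONTENT_LAYER = 21                  # a deeper conv layer for content

def extract(x):
    """Run x through VGG's feature layers, collecting style/content activations."""
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(feat):
    """Gram matrix of a (1, C, H, W) activation; assumes batch size 1."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def stylize(content_img, style_img, steps=200, style_weight=1e6):
    """content_img/style_img: (1, 3, H, W) ImageNet-normalized tensors on `device`."""
    target = content_img.clone().requires_grad_(True)
    style_grams = [gram(f) for f in extract(style_img)[0]]
    content_feat = extract(content_img)[1]
    opt = torch.optim.Adam([target], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        s_feats, c_feat = extract(target)
        c_loss = F.mse_loss(c_feat, content_feat)
        s_loss = sum(F.mse_loss(gram(f), g) for f, g in zip(s_feats, style_grams))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return target.detach()
```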
- Award ID(s): 2315804
- PAR ID: 10637550
- Publisher / Repository: Florida Online Journals
- Date Published:
- Journal Name: The International FLAIRS Conference Proceedings
- Volume: 38
- ISSN: 2334-0754
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Artificial intelligence (AI) has gained widespread public interest in recent years. However, as AI literacy remained excluded from the standard academic curricula, AI education in the US was predominantly offered through extra-curricular activities, which limited AI learning exposure to only a select group of students. Given these limitations, the need to integrate AI literacy education into the standard curricula is increasingly evident. This study investigated the integration of AI learning in an advanced biology course. Thirty-seven students participated in four lessons embedding AI learning in biology contexts. The interplay of students' AI learning and biology knowledge was examined using a quantitative measure of conceptual understanding and a qualitative analysis of interdisciplinary reasoning. This concurrent triangulation research design utilized results from both quantitative and qualitative analyses to develop a comprehensive understanding of students' AI learning in the biology context. The results of the study showed a significant improvement in students' understanding of AI concepts. Students' biology knowledge showed a slight increase, but it was not statistically significant. Both quantitative and qualitative results underscored a close connection between students' AI learning and their biology knowledge, though the quantitative findings were not conclusive in some lessons. The article concluded with a discussion of the potential reasons for those discrepancies. In addition, suggestions were provided for future research and practitioners who are interested in integrating AI education across curricula.
-
Abstract To date, many AI initiatives (eg, AI4K12, CS for All) developed standards and frameworks as guidance for educators to create accessible and engaging Artificial Intelligence (AI) learning experiences for K‐12 students. These efforts revealed a significant need to prepare youth to gain a fundamental understanding of how intelligence is created, applied, and its potential to perpetuate bias and unfairness. This study contributes to the growing interest in K‐12 AI education by examining student learning of modelling real‐world text data. Four students from an Advanced Placement computer science classroom at a public high school participated in this study. Our qualitative analysis reveals that the students developed nuanced and in‐depth understandings of how text classification models—a type of AI application—are trained. Specifically, we found that in modelling texts, students: (1) drew on their social experiences and cultural knowledge to create predictive features, (2) engineered predictive features to address model errors, (3) described model learning patterns from training data and (4) reasoned about noisy features when comparing models. This study contributes to an initial understanding of student learning of modelling unstructured data and offers implications for scaffolding in‐depth reasoning about model decision making (a feature‐engineering sketch follows these notes).

Practitioner notes

What is already known about this topic
- Scholarly attention has turned to examining Artificial Intelligence (AI) literacy in K‐12 to help students understand the working mechanism of AI technologies and critically evaluate automated decisions made by computer models.
- While efforts have been made to engage students in understanding AI through building machine learning models with data, few of them go in‐depth into teaching and learning of feature engineering, a critical concept in modelling data.
- There is a need for research to examine students' data modelling processes, particularly in the little‐researched realm of unstructured data.

What this paper adds
- Results show that students developed nuanced understandings of models learning patterns in data for automated decision making.
- Results demonstrate that students drew on prior experience and knowledge in creating features from unstructured data in the learning task of building text classification models.
- Students needed support in performing feature engineering practices, reasoning about noisy features and exploring features in rich social contexts that the data set is situated in.

Implications for practice and/or policy
- It is important for schools to provide hands‐on model building experiences for students to understand and evaluate automated decisions from AI technologies.
- Students should be empowered to draw on their cultural and social backgrounds as they create models and evaluate data sources.
- To extend this work, educators should consider opportunities to integrate AI learning in other disciplinary subjects (ie, outside of computer science classes).
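As a concrete illustration of the feature engineering practice this study examines, the sketch below hand-crafts a few predictive features from short texts and feeds them to a simple classifier. It is not drawn from the study's classroom materials; the feature names, example texts and labels are invented for demonstration.

```python
# Hypothetical illustration of hand-engineered features for text classification;
# the features, texts and labels below are invented for demonstration only.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

def engineer_features(text: str) -> dict:
    """Turn raw text into a small set of human-designed predictive features."""
    words = [w.strip("!?,.") for w in text.lower().split()]
    return {
        "num_words": len(words),
        "has_exclamation": "!" in text,
        "mentions_free": "free" in words,       # cues a student might propose
        "mentions_winner": "winner" in words,
    }

texts = ["You are a winner! Claim your free prize!",
         "Free tickets, reply now!",
         "Are we still meeting for lunch tomorrow?",
         "Here are the notes from today's class."]
labels = [1, 1, 0, 0]                            # 1 = spam, 0 = not spam

vec = DictVectorizer()
X = vec.fit_transform([engineer_features(t) for t in texts])
model = LogisticRegression().fit(X, labels)

test = "Congratulations, you are our lucky winner!"
print(model.predict(vec.transform([engineer_features(test)])))
```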
-
With the growing prevalence of AI, the need for K-12 AI education is becoming more crucial, prompting active research in developing engaging and age-appropriate AI learning activities. Efforts are underway, such as those by the AI4K12 initiative, to establish guidelines for organizing K-12 AI education; however, educators still need effective instructional resources. In this paper, we describe our work to design, develop, and implement an unplugged activity centered on facial recognition technology for middle school students. Facial recognition is integrated into a wide range of applications throughout daily life, which makes it a familiar and engaging tool for students and an effective medium for conveying AI concepts. Our unplugged activity, “Guess Whose Face,” is designed as a board game that focuses on Representation and Reasoning from AI4K12’s 5 Big Ideas in AI. The game is crafted to enable students to develop AI competencies naturally through physical interaction. In the game, one student uses tracing paper to extract facial features from a familiar face shown on a card, such as a cartoon character or celebrity, and then other students try to guess the identity of the hidden face. We discuss details of the game, its iterative refinement, and initial findings from piloting the activity during a summer camp for rural middle school students.
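To make the Representation and Reasoning ideas behind the game concrete, here is a small, hypothetical digital analogue: each face is represented as a vector of traced features, and a guess is made by finding the closest known face. The feature names and characters are invented; the activity itself is unplugged and uses no code.

```python
# Hypothetical digital analogue of the "Guess Whose Face" activity:
# represent each face as a feature vector (Representation) and guess the
# identity by nearest-neighbour matching (Reasoning). Names/features invented.
import math

# Hand-coded facial features, as a student might trace them:
# (eye spacing, nose length, mouth width, face roundness), scaled 0-1.
KNOWN_FACES = {
    "Character A": (0.30, 0.55, 0.40, 0.90),
    "Character B": (0.45, 0.35, 0.60, 0.50),
    "Character C": (0.25, 0.70, 0.35, 0.70),
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def guess_face(traced_features):
    """Return the known identity whose features best match the tracing."""
    return min(KNOWN_FACES, key=lambda name: distance(KNOWN_FACES[name], traced_features))

# A slightly noisy tracing of Character C's face:
print(guess_face((0.27, 0.68, 0.33, 0.72)))   # -> "Character C"
```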
-
Although the prevention of AI vulnerabilities is critical to preserve the safety and privacy of users and businesses, educational tools for robust AI are still underdeveloped worldwide. We present the design, implementation, and assessment of Maestro. Maestro is an effective open-source game-based platform that contributes to the advancement of robust AI education. Maestro provides goal-based scenarios where college students are exposed to challenging life-inspired assignments in a competitive programming environment. We assessed Maestro's influence on students' engagement, motivation, and learning success in robust AI. This work also provides insights into the design features of online learning tools that promote active learning opportunities in the robust AI domain. We analyzed the reflection responses (measured with Likert scales) of 147 undergraduate students using Maestro in two quarterly college courses in AI. According to the results, students who felt they had acquired new skills in robust AI tended to rate Maestro highly and scored highly on material consolidation, curiosity, and mastery in robust AI. Moreover, the leaderboard, our key gamification element in Maestro, contributed effectively to students' engagement and learning. Results also indicate that Maestro can be effectively adapted to any course length and depth without losing its educational quality.
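Robust AI exercises of the kind described above commonly revolve around adversarial examples. The sketch below shows one classic attack, the fast gradient sign method (FGSM), applied to a tiny untrained model; it is offered only as an illustration of the topic and is not taken from Maestro's actual assignments.

```python
# Illustrative FGSM adversarial perturbation on a tiny, untrained classifier;
# not Maestro's code, shown only to indicate the kind of vulnerability studied.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy "image" classifier
loss_fn = nn.CrossEntropyLoss()

def fgsm(x, label, epsilon=0.1):
    """Perturb x one step in the direction that increases the loss (FGSM)."""
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), label)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

x = torch.rand(1, 1, 28, 28)          # a stand-in for a real input image
label = torch.tensor([3])             # its (pretend) true class
x_adv = fgsm(x, label)

# With a trained model, the adversarial prediction often flips class.
print("clean prediction:", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```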