Abstract
Recent advances in generative artificial intelligence (AI) and multimodal learning analytics (MMLA) have allowed for new and creative ways of leveraging AI to support K12 students' collaborative learning in STEM+C domains. To date, there is little evidence of AI methods supporting students' collaboration in complex, open-ended environments. AI systems are known to underperform humans in (1) interpreting students' emotions in learning contexts, (2) grasping the nuances of social interactions and (3) understanding domain-specific information that was not well-represented in the training data. As such, combined human and AI (ie, hybrid) approaches are needed to overcome the current limitations of AI systems. In this paper, we take a first step towards investigating how a human-AI collaboration between teachers and researchers using an AI-generated multimodal timeline can guide and support teachers' feedback while addressing students' STEM+C difficulties as they work collaboratively to build computational models and solve problems. In doing so, we present a framework characterizing the human component of our human-AI partnership as a collaboration between teachers and researchers. To evaluate our approach, we present our timeline to a high school teacher and discuss the key insights gleaned from our discussions. Our case study analysis reveals the effectiveness of an iterative approach to using human-AI collaboration to address students' STEM+C challenges: the teacher can use the AI-generated timeline to guide formative feedback for students, and the researchers can leverage the teacher's feedback to help improve the multimodal timeline. Additionally, we characterize our findings with respect to two events of interest to the teacher: (1) when the students cross a difficulty threshold, and (2) the point of intervention, that is, when the teacher (or system) should intervene to provide effective feedback. It is important to note that the teacher explained that there should be a lag between (1) and (2) to give students a chance to resolve their own difficulties. Typically, such a lag is not implemented in computer-based learning environments that provide feedback.
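The lag the abstract describes between the difficulty threshold and the point of intervention suggests a simple monitoring mechanism: start a timer when students cross the threshold, and flag an intervention only if the difficulty persists past the latency interval. The minimal Python sketch below illustrates the idea; the class name, threshold and timings are hypothetical, as the paper does not specify an implementation.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class LaggedInterventionMonitor:
    """Flags an intervention point only after students have remained above a
    difficulty threshold for a full latency interval, giving them a chance to
    resolve the difficulty on their own first (hypothetical sketch)."""
    difficulty_threshold: float            # score above which students are "in difficulty"
    latency_interval: float                # seconds to wait before intervening
    _crossed_at: Optional[float] = field(default=None, init=False)

    def update(self, timestamp: float, difficulty_score: float) -> bool:
        """Return True when the teacher (or system) should intervene."""
        if difficulty_score < self.difficulty_threshold:
            self._crossed_at = None        # difficulty resolved; reset the clock
            return False
        if self._crossed_at is None:
            self._crossed_at = timestamp   # threshold just crossed; start the lag
        return timestamp - self._crossed_at >= self.latency_interval

# Example: intervene only if difficulty persists for 60 seconds.
monitor = LaggedInterventionMonitor(difficulty_threshold=0.7, latency_interval=60.0)
for t, score in [(0.0, 0.4), (30.0, 0.8), (60.0, 0.85), (95.0, 0.9)]:
    if monitor.update(t, score):
        print(f"t={t}s: intervention point reached")   # fires at t=95.0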
Practitioner notes

What is already known about this topic
- Collaborative, open-ended learning environments enhance students' STEM+C conceptual understanding and practice, but they introduce additional complexities when students learn concepts spanning multiple domains.
- Recent advances in generative AI and MMLA allow for integrating multiple data streams to derive holistic views of students' states, which can support more informed feedback mechanisms to address students' difficulties in complex STEM+C environments.
- Hybrid human-AI approaches can help address collaborating students' STEM+C difficulties by combining the domain knowledge, emotional intelligence and social awareness of human experts with the general knowledge and efficiency of AI.

What this paper adds
- We extend a previous human-AI collaboration framework using a hybrid intelligence approach to characterize the human component of the partnership as a researcher-teacher partnership and present our approach as a teacher-researcher-AI collaboration.
- We adapt an AI-generated multimodal timeline to actualize our human-AI collaboration by pairing the timeline with videos of students encountering difficulties, engaging in active discussions with a high school teacher while watching the videos to discern the timeline's utility in the classroom.
- From our discussions with the teacher, we define two types of inflection points to address students' STEM+C difficulties, the difficulty threshold and the intervention point, and discuss how the feedback latency interval separating them can inform educator interventions.
- We discuss two ways in which our teacher-researcher-AI collaboration can help teachers support students encountering STEM+C difficulties: (1) teachers using the multimodal timeline to guide feedback for students, and (2) researchers using teachers' input to iteratively refine the multimodal timeline.

Implications for practice and/or policy
- Our case study suggests that timeline gaps (ie, disengaged behaviour identified by off-screen students, pauses in discourse and lulls in environment actions) are particularly important for identifying inflection points and formulating formative feedback (see the gap-detection sketch after this list).
- Human-AI collaboration exists on a dynamic spectrum and requires varying degrees of human control and AI automation depending on the context of the learning task and students' work in the environment.
- Our analysis of this human-AI collaboration using a multimodal timeline can be extended in the future to support students and teachers in additional ways, for example, designing pedagogical agents that interact directly with students, developing intervention and reflection tools for teachers, helping teachers craft daily lesson plans and aiding teachers and administrators in designing curricula.
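The timeline gaps described above (lulls across discourse, action and gaze streams) can be detected mechanically once events from all modalities are merged onto one clock. A minimal Python sketch of one way such detection might work; the event times and gap length are hypothetical:

def find_timeline_gaps(timestamps, session_end, min_gap=30.0):
    """Return (start, end) intervals of at least `min_gap` seconds with no
    recorded activity in any modality -- candidate disengagement periods."""
    gaps = []
    previous = 0.0
    for t in sorted(timestamps) + [session_end]:
        if t - previous >= min_gap:
            gaps.append((previous, t))
        previous = max(previous, t)
    return gaps

# Merged event times from speech, environment actions and gaze (hypothetical).
events = [3.0, 5.5, 48.0, 50.2, 51.0, 120.0]
print(find_timeline_gaps(events, session_end=180.0))
# -> [(5.5, 48.0), (51.0, 120.0), (120.0, 180.0)]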
Designing an AI-Enhanced Timeline for Monitoring Multimodal Interactions in Embodied Learning Environments
Embodied learning represents a natural and immersive approach to education, where the physical engagement of learners plays a critical role in how they perceive and internalize concepts. Such engagement allows students to actively embody and explore knowledge through interaction with their environment, significantly enhancing retention and understanding of complex subjects. However, researchers face significant challenges in studying children's learning in these physically interactive spaces, particularly due to the complexity of tracking multiple students' movements and dynamic interactions in real time. To address these challenges, this paper introduces a Double Diamond design thinking process for developing an AI-enhanced timeline aimed at assisting researchers in visualizing and analyzing interactions within embodied learning environments. We outline key considerations, challenges, and lessons learned in this user-centered design process. Our goal is to create a timeline that employs state-of-the-art AI techniques to help researchers interpret complex datasets, such as children's movements, gaze directions, and affective states during learning activities, thereby simplifying their tasks and augmenting the process of interaction analysis.
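A timeline like the one described aligns events from several asynchronous streams (movement, gaze, affect) onto a single ordered view. As a rough illustration, the Python sketch below keeps merged events sorted and supports windowed queries; all class and field names are hypothetical, not the system's actual design.

from bisect import insort
from dataclasses import dataclass, field

@dataclass(order=True)
class TimelineEvent:
    timestamp: float                      # seconds from session start
    modality: str = field(compare=False)  # e.g. "gaze", "movement", "affect"
    label: str = field(compare=False)     # e.g. "off-screen", "frustrated"

class MultimodalTimeline:
    """Merges events from independent data streams into one ordered timeline
    so researchers can scan for co-occurring signals across modalities."""
    def __init__(self):
        self._events = []

    def add(self, event):
        insort(self._events, event)       # keep events sorted by timestamp

    def window(self, start, end):
        """All events with start <= timestamp < end, across every modality."""
        return [e for e in self._events if start <= e.timestamp < end]

timeline = MultimodalTimeline()
timeline.add(TimelineEvent(12.4, "gaze", "off-screen"))
timeline.add(TimelineEvent(11.9, "affect", "frustrated"))
timeline.add(TimelineEvent(13.1, "movement", "idle"))
for e in timeline.window(10.0, 15.0):
    print(f"{e.timestamp:6.1f}s  {e.modality:>8}: {e.label}")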
- Award ID(s): 2112635
- PAR ID: 10637581
- Publisher / Repository: International Conference on Computers in Education
- Date Published:
- Journal Name: International Conference on Computers in Education
- ISSN: 3078-4360
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This study examined the effects of embodied learning experiences on students' understanding of computational thinking (CT) concepts and their ability to solve CT problems. In a mixed-reality learning environment, students mapped CT concepts, such as sequencing and loops, onto their bodily movements. These movements were later applied to robot programming tasks, where students used the same CT concepts in a different modality. By explicitly connecting embodied actions with programming tasks, the intervention aimed to enhance students' comprehension and transfer of CT skills. Forty-four first- and second-grade students participated in the study. The results showed significant improvements in students' CT competency and positive attitudes toward CT. Additionally, an analysis of robot programming performance identified common errors and revealed how students employed embodied strategies to overcome challenges. The effects of embodied learning and the impact of embodied learning strategies are discussed.
-
Embodied interaction is particularly useful in museums because it allows designers to leverage findings from embodied cognition to support the learning of STEM concepts and thinking skills. In this paper, we focus on Human-Data Interaction (HDI), a class of embodied interactions concerned with the design of interactive data visualizations that users control with gestures and body movements. We describe an HDI system that we iteratively designed, implemented, and observed at a science museum, and that allows visitors to explore large sets of data on two 3D globe maps. We present and discuss design strategies and optimizations that we implemented to mitigate two sets of design challenges: (1) dealing with display, interaction, and affordance blindness; and (2) supporting multiple functionalities and collaboration.
-
Cohen, J; Solano, G (Eds.) This study investigates the effects of embodied learning experiences in learning abstract concepts, such as computational thinking (CT), among young learners. Specifically, it examines whether the benefits of embodied learning can be replicated within a mixed-reality setting, where students engage with virtual objects to perform CT tasks. A group of ten first-grade students from an elementary school participated, engaging in embodied learning activities followed by assessments in CT. Through the analysis of video recordings, it was observed that participants could effectively articulate CT concepts, including the understanding of programming code meanings and their sequences, through their bodily movements. The congruence between students' bodily movements and CT concepts was found to be advantageous for their comprehension. However, the study also noted instances of incongruent movements that did not align with the intended CT concepts, which drew the researchers' attention. The study identified two distinct types of embodiment manifested in the mixed-reality environment, shedding light on the nuanced dynamics of embodied learning in the context of CT education.
-
Efficient performance and acquisition of physical skills, from sports techniques to surgical procedures, require instruction and feedback. In the absence of a human expert, Mixed Reality Intelligent Task Support (MixITS) can offer a promising alternative. These systems integrate Artificial Intelligence (AI) and Mixed Reality (MR) to provide real-time feedback and instruction as users practice and learn skills using physical tools and objects. However, designing MixITS systems presents challenges beyond engineering complexities. The complex interactions between users, AI, MR interfaces, and the physical environment create unique design obstacles. To address these challenges, we present MixITS-Kit, an interaction design toolkit derived from our analysis of MixITS prototypes developed by eight student teams during a 10-week-long graduate course. Our toolkit comprises design considerations, design patterns, and an interaction canvas. Our evaluation suggests that the toolkit can serve as a valuable resource for novice practitioners designing MixITS systems and researchers developing new tools for human-AI interaction design.