Title: Providing tailored reflection instructions in collaborative learning using large language models
Abstract

The relative effectiveness of reflection through student generation of contrasting cases versus reflection on provided contrasting cases is not well established for adult learners. This paper presents a classroom study investigating this comparison in a college-level Computer Science (CS) course in which groups of students worked collaboratively to design database access strategies. Forty-four teams were randomly assigned to three reflection conditions: [GEN], a directive to generate a contrasting case to the student solution and evaluate the trade-offs in light of the principle; [CONT], a directive to compare the student solution with a provided contrasting case and evaluate the trade-offs in light of a principle; and [NSI], a control condition with a non-specific directive to reflect on the student solution in light of a principle. In the CONT condition, illustrating the use of LLMs for knowledge transformation beyond knowledge construction in generating an automated contribution to a collaborative learning discussion, an LLM generated a contrasting case to a group's solution that exemplified an alternative problem-solving strategy while keeping many concrete details the same as those the group had most recently collaboratively constructed, thereby highlighting the contrast. While there was no main effect of condition on learning as measured by a content test, low-pretest students learned more from CONT than from GEN, with NSI not distinguishable from the other two, while high-pretest students learned marginally more from GEN than from CONT, again with NSI not distinguishable from the other two.
Practitioner notes

What is already known about this topic
- Reflection during or even in place of computer programming is beneficial for learning principles of advanced computer science when the principles are new to students.
- Generation of contrasting cases and comparison of contrasting cases have both been demonstrated to be effective opportunities to learn from reflection in some contexts, though questions remain about their ideal applicability conditions for adult learners.
- Intelligent conversational agents can be used effectively to deliver stimuli for reflection during collaborative learning, though room for improvement remains, which provides an opportunity to demonstrate the potential positive contribution of large language models (LLMs).

What this paper adds
- The study contributes new knowledge about the differences in applicability conditions between generation of contrasting cases and comparison across provided contrasting cases for adult learning.
- The paper presents an application of LLMs as a tool to provide contrasting cases tailored to the details of actual student solutions.
- The study provides evidence from a classroom intervention study of positive impact on student learning of an LLM-enabled intervention.

Implications for practice and/or policy
- Advanced computer science curricula should make substantial room for reflection alongside problem solving.
- Instructors should provide reflection opportunities tailored to students' level of prior knowledge.
- Instructors would benefit from training to use LLMs as tools for providing effective contrasting cases, especially for low-prior-knowledge students.
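The abstract describes an LLM generating a contrasting case tailored to a team's solution, holding concrete details constant so only the strategy differs. A minimal sketch of how such a tailored prompt might be constructed is shown below; the template wording, function name and fields are illustrative assumptions, not the authors' actual prompt.

```python
# Illustrative sketch only: the paper does not publish its exact prompt.
# Template wording and parameter names are assumptions for illustration.

CONTRAST_TEMPLATE = """You are assisting a collaborative database-design exercise.
The student team proposed the following database access strategy:

{solution}

Generate a contrasting case that applies the alternative strategy
"{alternative}" while keeping the concrete details (table names, fields,
workload) the same as the team's solution, so the trade-offs under the
principle "{principle}" are easy to compare."""

def build_contrast_prompt(solution: str, alternative: str, principle: str) -> str:
    """Fill the template with the team's most recently constructed solution."""
    return CONTRAST_TEMPLATE.format(
        solution=solution.strip(),
        alternative=alternative,
        principle=principle,
    )

prompt = build_contrast_prompt(
    solution="Cache the full user table in application memory.",
    alternative="query the database on each request",
    principle="trading memory footprint against query latency",
)
```

The key design point the abstract emphasizes is that the team's own solution text is embedded verbatim, so the generated contrast stays anchored to details the group just constructed.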
Award ID(s):
2222762
PAR ID:
10644940
Author(s) / Creator(s):
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
British Journal of Educational Technology
Volume:
56
Issue:
2
ISSN:
0007-1013
Format(s):
Medium: X
Size(s):
p. 531-550
Sponsoring Org:
National Science Foundation
More Like this
  1. With increasing interest in computer-assisted education, AI-integrated systems are becoming highly applicable through their ability to adapt based on user interactions. In this context, this paper focuses on understanding and analysing first-year undergraduate student responses to an intelligent educational system that applies multi-agent reinforcement learning as an AI tutor. With human–computer interaction at the centre, we discuss principles of interface design and educational gamification in the context of multiple years of student observations, student feedback surveys and focus group interviews. We report positive feedback on the design methodology we discuss, as well as on the overall process of providing automated tutoring in a gamified virtual environment. We also discuss students' thinking in the context of gamified educational systems, along with unexpected issues that may arise when implementing such systems. Ultimately, our design iterations and analysis offer new insights for practical implementation of computer-assisted educational systems, focusing on how AI can augment, rather than replace, human intelligence in the classroom.
Practitioner notes

What is already known about this topic
- AI-integrated systems show promise for personalizing learning and improving student education.
- Existing research has shown the value of personalized learner feedback.
- Engaged students learn more effectively.

What this paper adds
- Student opinions of and responses to an HCI-based personalized educational system.
- New insights for practical implementation of AI-integrated educational systems, informed by years of student observations and system improvements.
- Qualitative insights into system design to improve human–computer interaction in educational systems.

Implications for practice and/or policy
- Actionable design principles for computer-assisted tutoring systems derived from first-hand student feedback and observations.
- New directions for human–computer interaction in educational systems.
  2. While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes

What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning but also raise ethical concerns such as transparency, trust and accountability.
- Value-sensitive design (VSD) presents a systematic approach to centring human values in technology design.

What this paper adds
- We apply VSD to design LLM-based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.

Implications for practice and/or policy
- Identity development, well-being, human–AI relationships and environmental sustainability are key values for designing LLM-based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
  3. Abstract: This paper provides an experience report on a co-design approach with teachers to co-create learning analytics-based technology to support problem-based learning in middle school science classrooms. We have mapped out a workflow for such applications and developed design narratives to investigate the implementation, modifications and temporal roles of the participants in the design process. Our results provide precedent knowledge on co-designing with experienced and novice teachers and on co-constructing actionable insight that can help teachers engage more effectively with their students' learning and problem-solving processes during classroom PBL implementations.

Practitioner notes

What is already known about this topic
- Success of educational technology depends in large part on the technology's alignment with teachers' goals for their students, teaching strategies and classroom context.
- Teacher and researcher co-design of educational technology and supporting curricula has proven to be an effective way of integrating teacher insight and supporting their implementation needs.
- Co-designing learning analytics and support technologies with teachers is difficult due to differences in design and development goals, workplace norms, and the AI-literacy and learning analytics background of teachers.

What this paper adds
- We provide a co-design workflow for middle school teachers that centres on co-designing and developing actionable insights to support problem-based learning (PBL) through systematic development of responsive teaching practices using AI-generated learning analytics.
- We adapt established human–computer interaction (HCI) methods to tackle the complex task of classroom PBL implementation, working with experienced and novice teachers to create a learning analytics dashboard for a PBL curriculum.
- We demonstrate researcher and teacher roles and needs in ensuring co-design collaboration and the co-construction of actionable insight to support middle school PBL.

Implications for practice and/or policy
- Learning analytics researchers will be able to use the workflow as a tool to support their PBL co-design processes.
- Learning analytics researchers will be able to apply adapted HCI methods for effective co-design processes.
- Co-design teams will be able to pre-emptively prepare for the difficulties and needs of teachers when integrating middle school teacher feedback during the co-design process in support of PBL technologies.
  4. Abstract: To date, many AI initiatives (eg, AI4K12, CS for All) have developed standards and frameworks to guide educators in creating accessible and engaging Artificial Intelligence (AI) learning experiences for K-12 students. These efforts revealed a significant need to prepare youth to gain a fundamental understanding of how intelligence is created and applied, and of its potential to perpetuate bias and unfairness. This study contributes to the growing interest in K-12 AI education by examining student learning of modelling real-world text data. Four students from an Advanced Placement computer science classroom at a public high school participated in this study. Our qualitative analysis reveals that the students developed nuanced and in-depth understandings of how text classification models—a type of AI application—are trained. Specifically, we found that in modelling texts, students: (1) drew on their social experiences and cultural knowledge to create predictive features, (2) engineered predictive features to address model errors, (3) described how models learn patterns from training data and (4) reasoned about noisy features when comparing models. This study contributes an initial understanding of student learning of modelling unstructured data and offers implications for scaffolding in-depth reasoning about model decision making.
Practitioner notes

What is already known about this topic
- Scholarly attention has turned to examining Artificial Intelligence (AI) literacy in K-12 to help students understand the working mechanisms of AI technologies and critically evaluate automated decisions made by computer models.
- While efforts have been made to engage students in understanding AI through building machine learning models with data, few go in depth into the teaching and learning of feature engineering, a critical concept in modelling data.
- There is a need for research examining students' data modelling processes, particularly in the little-researched realm of unstructured data.

What this paper adds
- Results show that students developed nuanced understandings of how models learn patterns in data for automated decision making.
- Results demonstrate that students drew on prior experience and knowledge in creating features from unstructured data in the learning task of building text classification models.
- Students needed support in performing feature engineering practices, reasoning about noisy features and exploring features in the rich social contexts in which the data set is situated.

Implications for practice and/or policy
- It is important for schools to provide hands-on model building experiences for students to understand and evaluate automated decisions from AI technologies.
- Students should be empowered to draw on their cultural and social backgrounds as they create models and evaluate data sources.
- To extend this work, educators should consider opportunities to integrate AI learning into other disciplinary subjects (ie, outside of computer science classes).
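The abstract above describes students hand-engineering predictive features for text classification. A minimal sketch of that kind of pipeline is below: hand-crafted features feeding a simple linear scorer. The feature names, weights and the spam-like framing are hypothetical, chosen only to make the feature engineering concrete; the students' actual task and model differ.

```python
# Illustrative sketch of student-style feature engineering for text
# classification. Features, weights and labels are hypothetical.

def extract_features(text: str) -> dict:
    """Map a raw text to hand-engineered predictive features."""
    words = text.lower().split()
    return {
        "has_exclamation": int("!" in text),          # punctuation cue
        "word_count": len(words),                     # length cue
        "mentions_free": int("free" in words),        # socially informed cue
    }

# Hypothetical weights a trained linear model might assign to each feature.
WEIGHTS = {"has_exclamation": 1.5, "word_count": -0.05, "mentions_free": 2.0}
BIAS = -1.0

def classify(text: str) -> int:
    """Return 1 if the weighted feature sum exceeds 0, else 0."""
    feats = extract_features(text)
    score = BIAS + sum(WEIGHTS[k] * v for k, v in feats.items())
    return int(score > 0)
```

Inspecting `extract_features` output against model errors mirrors the practice the study observed: students added or revised features specifically to address cases the model got wrong.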
  5. Abstract: Capturing evidence of dynamic changes in self-regulated learning (SRL) behaviours resulting from interventions is challenging for researchers. In the current study, we identified students who were likely to do poorly in a biology course and those who were likely to do well. Then, we randomly assigned a portion of the students predicted to perform poorly to a science-of-learning-to-learn intervention in which they were taught SRL study strategies. Learning outcome and log data (257 K events) were collected from n = 226 students. We used a complex systems framework to model differences in SRL, including the amount, interrelatedness, density and regularity of engagement captured in digital trace data (ie, logs). Differences were compared between students who were predicted to (1) perform poorly (control, n = 48), (2) perform poorly and received the intervention (treatment, n = 95) and (3) perform well (not flagged, n = 83). Results indicated that the regularity of students' engagement was predictive of course grade, and that the intervention group exhibited increased regularity of engagement over the control group immediately after the intervention and maintained that increase over the course of the semester. We discuss the implications of these findings in relation to the future of artificial intelligence and potential uses for monitoring student learning in online environments.
Practitioner notes

What is already known about this topic
- Self-regulated learning (SRL) knowledge and skills are strong predictors of postsecondary STEM student success.
- SRL is a dynamic, temporal process that leads to purposeful student engagement.
- Methods and metrics for measuring dynamic SRL behaviours in learning contexts are needed.

What this paper adds
- A Markov process for measuring dynamic SRL processes using log data.
- Evidence that dynamic, interaction-dominant aspects of SRL predict student achievement.
- Evidence that SRL processes can be meaningfully impacted through educational intervention.

Implications for theory and practice
- Complexity approaches inform theory and measurement of dynamic SRL processes.
- Static representations of dynamic SRL processes are promising learning analytics metrics.
- Engineered features of LMS usage are valuable contributions to AI models.
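The notes above mention a Markov process over log events as the measurement device. A minimal sketch of that idea is below: estimate first-order transition probabilities between logged event types, then summarize each student's behaviour with the mean entropy of the transition rows, one possible proxy for regularity (lower entropy means more regular). The event names and the entropy-based metric are assumptions for illustration; the paper's exact features may differ.

```python
# Illustrative sketch: first-order Markov transitions from an event log,
# plus an entropy-based regularity proxy. Event names are made up.
from collections import Counter, defaultdict
import math

def transition_matrix(events):
    """Estimate P(next event | current event) from a sequence of log events."""
    counts = defaultdict(Counter)
    for a, b in zip(events, events[1:]):
        counts[a][b] += 1
    return {
        a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
        for a, nexts in counts.items()
    }

def transition_entropy(matrix):
    """Mean Shannon entropy of the rows; lower means more regular behaviour."""
    rows = [
        -sum(p * math.log2(p) for p in row.values() if p > 0)
        for row in matrix.values()
    ]
    return sum(rows) / len(rows)

log = ["read", "quiz", "read", "quiz", "read", "video"]
matrix = transition_matrix(log)
regularity = transition_entropy(matrix)
```

A perfectly alternating log (eg, read, quiz, read, quiz, …) yields zero entropy, while erratic switching between many event types pushes the value up, which is the sense in which the metric captures regularity of engagement.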