Abstract: The relative effectiveness of reflection either through student generation of contrasting cases or through provided contrasting cases is not well‐established for adult learners. This paper presents a classroom study to investigate this comparison in a college level Computer Science (CS) course where groups of students worked collaboratively to design database access strategies. Forty‐four teams were randomly assigned to three reflection conditions ([GEN] directive to generate a contrasting case to the student solution and evaluate their trade‐offs in light of the principle, [CONT] directive to compare the student solution with a provided contrasting case and evaluate their trade‐offs in light of a principle, and [NSI] a control condition with a non‐specific directive for reflection evaluating the student solution in light of a principle). In the CONT condition, as an illustration of the use of LLMs to exemplify knowledge transformation beyond knowledge construction in the generation of an automated contribution to a collaborative learning discussion, an LLM generated a contrasting case to a group's solution to exemplify application of an alternative problem solving strategy in a way that highlighted the contrast by keeping many concrete details the same as those the group had most recently collaboratively constructed. While there was no main effect of condition on learning based on a content test, low‐pretest students learned more from CONT than GEN, with NSI not distinguishable from the other two, while high‐pretest students learned marginally more from the GEN condition than the CONT condition, with NSI not distinguishable from the other two.
Practitioner notes
What is already known about this topic
- Reflection during or even in place of computer programming is beneficial for learning of principles for advanced computer science when the principles are new to students.
- Generation of contrasting cases and comparing contrasting cases have both been demonstrated to be effective as opportunities to learn from reflection in some contexts, though questions remain about ideal applicability conditions for adult learners.
- Intelligent conversational agents can be used effectively to deliver stimuli for reflection during collaborative learning, though room for improvement remains, which provides an opportunity to demonstrate the potential positive contribution of large language models (LLMs).
What this paper adds
- The study contributes new knowledge related to the differences in applicability conditions between generation of contrasting cases and comparison across provided contrasting cases for adult learning.
- The paper presents an application of LLMs as a tool to provide contrasting cases tailored to the details of actual student solutions.
- The study provides evidence from a classroom intervention study for positive impact on student learning of an LLM‐enabled intervention.
Implications for practice and/or policy
- Advanced computer science curricula should make substantial room for reflection alongside problem solving.
- Instructors should provide reflection opportunities for students tailored to their level of prior knowledge.
- Instructors would benefit from training to use LLMs as tools for providing effective contrasting cases, especially for low‐prior‐knowledge students.
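The abstract describes the LLM producing a contrasting case that keeps a team's concrete details while swapping the problem-solving strategy. The sketch below is purely illustrative and not from the paper: the function name, prompt wording and example strings are all assumptions about how such a solution-tailored request could be composed.

```python
# Hypothetical sketch of composing a prompt for a solution-tailored
# contrasting case; none of these names or strings come from the study.

def build_contrasting_case_prompt(student_solution: str, principle: str,
                                  alternative_strategy: str) -> str:
    """Ask an LLM for a contrasting case that reuses the team's concrete
    details (tables, fields) but applies a different strategy."""
    return (
        "Here is a database access strategy written by a student team:\n"
        f"{student_solution}\n\n"
        f"Rewrite it so that it instead applies this strategy: "
        f"{alternative_strategy}. Keep table names, fields and other "
        "concrete details the same, so the contrast in how each version "
        f"addresses the principle '{principle}' is easy to see."
    )

prompt = build_contrasting_case_prompt(
    "SELECT * FROM orders JOIN customers ON ...",  # hypothetical team solution
    "minimise round trips to the database",        # hypothetical principle
    "denormalise and cache the joined result",     # hypothetical alternative
)
```

The resulting string would then be sent to whatever LLM API the deployment uses; only the prompt construction is shown here.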
Leveraging complexity frameworks to refine theories of engagement: Advancing self‐regulated learning in the age of artificial intelligence
Abstract: Capturing evidence for dynamic changes in self‐regulated learning (SRL) behaviours resulting from interventions is challenging for researchers. In the current study, we identified students who were likely to do poorly in a biology course and those who were likely to do well. Then, we randomly assigned a portion of the students predicted to perform poorly to a science of learning to learn intervention where they were taught SRL study strategies. Learning outcome and log data (257 K events) were collected from n = 226 students. We used a complex systems framework to model the differences in SRL including the amount, interrelatedness, density and regularity of engagement captured in digital trace data (ie, logs). Differences were compared between students who were predicted to (1) perform poorly (control, n = 48), (2) perform poorly and received intervention (treatment, n = 95) and (3) perform well (not flagged, n = 83). Results indicated that the regularity of students' engagement was predictive of course grade, and that the intervention group exhibited increased regularity in engagement over the control group immediately after the intervention and maintained that increase over the course of the semester. We discuss the implications of these findings in relation to the future of artificial intelligence and potential uses for monitoring student learning in online environments.
Practitioner notes
What is already known about this topic
- Self‐regulated learning (SRL) knowledge and skills are strong predictors of postsecondary STEM student success.
- SRL is a dynamic, temporal process that leads to purposeful student engagement.
- Methods and metrics for measuring dynamic SRL behaviours in learning contexts are needed.
What this paper adds
- A Markov process for measuring dynamic SRL processes using log data.
- Evidence that dynamic, interaction‐dominant aspects of SRL predict student achievement.
- Evidence that SRL processes can be meaningfully impacted through educational intervention.
Implications for theory and practice
- Complexity approaches inform theory and measurement of dynamic SRL processes.
- Static representations of dynamic SRL processes are promising learning analytics metrics.
- Engineered features of LMS usage are valuable contributions to AI models.
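The Markov-process approach to log data mentioned above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the event names are invented, and the entropy-based "regularity" score is one plausible reading of a regularity metric, not the paper's exact measure.

```python
import math
from collections import Counter, defaultdict

def transition_probs(events):
    """First-order Markov transition probabilities estimated from an
    ordered log of event labels."""
    counts = defaultdict(Counter)
    for a, b in zip(events, events[1:]):
        counts[a][b] += 1
    return {a: {b: n / sum(row.values()) for b, n in row.items()}
            for a, row in counts.items()}

def regularity(events):
    """1 minus the mean normalised entropy of each state's outgoing
    transitions: 1.0 means fully predictable (regular) behaviour."""
    probs = transition_probs(events)
    states = set(events)
    if len(states) < 2:
        return 1.0
    hmax = math.log(len(states))
    row_entropies = [-sum(p * math.log(p) for p in row.values()) / hmax
                     for row in probs.values()]
    return 1.0 - sum(row_entropies) / len(row_entropies)

# A perfectly cyclic study routine scores 1.0; a noisier one scores lower.
routine = ["login", "view", "quiz", "login", "view", "quiz", "login", "view"]
```

A "static representation" of this dynamic process, in the sense of the practitioner notes, would be the transition matrix itself, which can then be fed to a predictive model as engineered features.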
- Award ID(s):
- 1821601
- PAR ID:
- 10419812
- Publisher / Repository:
- Wiley-Blackwell
- Date Published:
- Journal Name:
- British Journal of Educational Technology
- Volume:
- 54
- Issue:
- 5
- ISSN:
- 0007-1013
- Format(s):
- Medium: X
- Size(s):
- p. 1204-1221
- Sponsoring Org:
- National Science Foundation
More Like this
-
With increasing interest in computer‐assisted education, AI‐integrated systems are highly applicable because of their ability to adapt based on user interactions. In this context, this paper focuses on understanding and analysing first‐year undergraduate student responses to an intelligent educational system that applies multi‐agent reinforcement learning as an AI tutor. With human–computer interaction at the centre, we discuss principles of interface design and educational gamification in the context of multiple years of student observations, student feedback surveys and focus group interviews. We show positive feedback from the design methodology we discuss as well as the overall process of providing automated tutoring in a gamified virtual environment. We also discuss students' thinking in the context of gamified educational systems, as well as unexpected issues that may arise when implementing such systems. Ultimately, our design iterations and analysis both offer new insights for practical implementation of computer‐assisted educational systems, focusing on how AI can augment, rather than replace, human intelligence in the classroom.
Practitioner notes
What is already known about this topic
- AI‐integrated systems show promise for personalizing learning and improving student education.
- Existing research has shown the value of personalized learner feedback.
- Engaged students learn more effectively.
What this paper adds
- Student opinions of and responses to an HCI‐based personalized educational system.
- New insights for practical implementation of AI‐integrated educational systems informed by years of student observations and system improvements.
- Qualitative insights into system design to improve human–computer interaction in educational systems.
Implications for practice and/or policy
- Actionable design principles for computer‐assisted tutoring systems derived from first‐hand student feedback and observations.
- Encourage new directions for human–computer interaction in educational systems.
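The abstract names multi-agent reinforcement learning as the tutoring mechanism but gives no detail. As a hedged illustration of the underlying idea only, the sketch below shows a single tabular Q-learning update of the kind an RL tutor could use to value tutoring actions; the states, actions and reward are invented for the example and are not the authors' system.

```python
# Minimal tabular Q-learning step for a hypothetical tutor agent.
# All state/action names and the reward are assumptions for illustration.
ACTIONS = ["give_hint", "ask_question", "show_example"]

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Move Q(s, a) toward reward + gamma * max over a' of Q(s', a')."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return q

q = {}
# The student was struggling, the tutor gave a hint, and the student
# then answered correctly (reward +1), so the hint's value increases.
q_update(q, "struggling", "give_hint", 1.0, "progressing")
```

In a multi-agent version, several such agents (e.g., one per pedagogical role) would learn concurrently, but the per-agent update has this shape.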
-
Abstract: Preparing preservice teachers (PSTs) to be able to notice, interpret, respond to and orchestrate student ideas—the core practices of responsive teaching—is a key goal for contemporary science and mathematics teacher education. This mixed‐methods study, employing a virtual reality (VR)‐supported simulation integrated with artificial intelligence (AI)‐powered virtual students, explored the frequent patterns of PSTs' talk moves as they attempted to orchestrate a responsive discussion, as well as the affordances and challenges of leveraging AI‐supported virtual simulation to enhance PSTs' responsive teaching skills. Sequential analysis of the talk moves of both PSTs (n = 24) and virtual students indicated that although PSTs did employ responsive talk moves, they encountered difficulties in transitioning from the authoritative, teacher‐centred teaching approach to a responsive way of teaching. The qualitative analysis with triangulated dialogue transcripts, observational field notes and semi‐structured interviews revealed participants' engagement in (1) orchestrating discussion by leveraging the design features of AI‐supported simulation, (2) iterative rehearsals through naturalistic and contextualized interactions and (3) exploring realism and boundaries in AI‐powered virtual students. The study findings provide insights into the potential of leveraging AI‐supported virtual simulation to improve PSTs' responsive teaching skills. The study also underscores the need for PSTs to engage in well‐designed pedagogical practices with adaptive and in situ support.
Practitioner notes
What is already known about this topic
- Developing the teaching capacity of responsive teaching is an important goal for preservice teacher (PST) education.
- PSTs need systematic opportunities to build fluency in this approach.
- Virtual simulations can provide PSTs with the opportunities to practice interactive teaching and have been shown to improve their teaching skills.
- Artificial intelligence (AI)‐powered virtual students can be integrated into virtual simulations to enable interactive and authentic practice of teaching.
What this paper adds
- AI‐supported simulation has the potential to support PSTs' responsive teaching skills.
- While PSTs enact responsive teaching talk moves, they struggle to enact those talk moves in challenging teaching scenarios due to limited epistemic and pedagogical resources.
- AI‐supported simulation affords iterative and contextualized opportunities for PSTs to practice responsive teaching talk moves; it challenges teachers to analyse student discourse and respond in real time.
Implications for practice and/or policy
- PSTs should build a teaching repertoire with both basic and advanced responsive talk moves.
- The learning module should adapt to PSTs' prior experience and provide PSTs with in situ learning support to navigate challenging teaching scenarios.
- Integrating interaction features and AI‐based virtual students into the simulation can facilitate PSTs' active participation.
-
While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes
What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning, but also raise ethical concerns such as transparency, trust and accountability.
- Value‐sensitive design (VSD) presents a systematic approach to centring human values in technology design.
What this paper adds
- We apply VSD to design LLM‐based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.
Implications for practice and/or policy
- Identity development, well‐being, human–AI relationships and environmental sustainability are key values for designing LLM‐based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.
-
Abstract: Much attention in constructionism has focused on designing tools and activities that support learners in designing fully finished and functional applications and artefacts to be shared with others. But helping students learn to debug their applications often takes on a surprisingly more instructionist stance by giving them checklists, teaching them strategies or providing them with test programmes. The idea of designing bugs for learning—or debugging by design—makes learners agents of their own learning and, more importantly, of making and solving mistakes. In this paper, we report on our implementation of 'Debugging by Design' activities in a high school classroom over a period of 8 hours as part of an electronic textiles unit. Students were tasked to craft the electronic textile artefacts with problems or bugs for their peers to solve. Drawing on observations and interviews, we answer the following research questions: (1) How did students participate in making bugs for others? (2) What did students gain from designing and solving bugs for others? In the discussion, we address the opportunities and challenges that designing personally and socially meaningful failure artefacts provides for becoming objects‐to‐think‐with and objects‐to‐share‐with in student learning and promoting new directions in constructionism.
Practitioner notes
What is already known about this topic
- There is substantial evidence for the benefits of learning programming and debugging in the context of constructing personally relevant and complex artefacts, including electronic textiles.
- Relatedly, work on productive failure has demonstrated that providing learners with strategically difficult problems (in which they 'fail') equips them to better handle subsequent challenges.
What this paper adds
- In this paper, we argue that designing bugs or 'failure artefacts' is as much a constructionist approach to learning as is designing fully functional artefacts.
- We consider how 'failure artefacts' can be both objects‐to‐learn‐with and objects‐to‐share‐with.
- We introduce the concept of 'Debugging by Design' (DbD) as a means to expand application of constructionism to the context of developing 'failure artefacts'.
Implications for practice and/or policy
- We conceptualise a new way to enable and empower students in debugging—by designing creative, multimodal buggy projects for others to solve.
- The DbD approach may support students in near‐transfer of debugging and the beginning of a more systematic approach to debugging in later projects and should be explored in other domains beyond e‐textiles.
- New studies should explore learning, design and teaching that empower students to design bugs in projects in mischievous and creative ways.
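To make the idea of a designed 'failure artefact' concrete: the study itself used e-textile circuits, but the same move can be pictured in code. Below is a hypothetical example of a bug a student might deliberately seed for peers, together with the boundary probe that exposes it; everything here is invented for illustration.

```python
# Toy 'failure artefact': a deliberately seeded off-by-one bug for peers
# to find (the study used e-textiles, not Python; this is illustrative).

def blink_pattern_buggy(n_leds):
    """Intended to return the 1-based index of every LED in the pattern.
    The designed bug: range() stops one short, so the last LED never
    appears. The bug only shows at the boundary, which is what makes it
    a good debugging puzzle."""
    return list(range(1, n_leds))        # seeded bug: should be n_leds + 1

def blink_pattern_fixed(n_leds):
    """The repaired version the designer keeps as the answer key."""
    return list(range(1, n_leds + 1))

# Probing the boundary reveals the designed failure: LED 4 is missing.
buggy, fixed = blink_pattern_buggy(4), blink_pattern_fixed(4)
```

The pedagogical point carries over: the designer must understand the system well enough to predict where the failure will surface, which is exactly the knowledge DbD aims to build.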
