Abstract

This paper provides an experience report on a co‐design approach with teachers to co‐create learning analytics‐based technology to support problem‐based learning (PBL) in middle school science classrooms. We have mapped out a workflow for such applications and developed design narratives to investigate the implementation, modifications and temporal roles of the participants in the design process. Our results provide precedent knowledge on co‐designing with experienced and novice teachers and co‐constructing actionable insight that can help teachers engage more effectively with their students' learning and problem‐solving processes during classroom PBL implementations.

Practitioner notes

What is already known about this topic
- Success of educational technology depends in large part on the technology's alignment with teachers' goals for their students, teaching strategies and classroom context.
- Teacher and researcher co‐design of educational technology and supporting curricula has proven to be an effective way of integrating teacher insight and supporting teachers' implementation needs.
- Co‐designing learning analytics and support technologies with teachers is difficult due to differences in design and development goals, workplace norms, and teachers' AI literacy and learning analytics background.

What this paper adds
- We provide a co‐design workflow for middle school teachers that centres on co‐designing and developing actionable insights to support PBL through the systematic development of responsive teaching practices using AI‐generated learning analytics.
- We adapt established human‐computer interaction (HCI) methods to tackle the complex task of classroom PBL implementation, working with experienced and novice teachers to create a learning analytics dashboard for a PBL curriculum.
- We demonstrate researcher and teacher roles and needs in ensuring co‐design collaboration and the co‐construction of actionable insight to support middle school PBL.

Implications for practice and/or policy
- Learning analytics researchers can use the workflow as a tool to support their PBL co‐design processes.
- Learning analytics researchers can apply adapted HCI methods for effective co‐design processes.
- Co‐design teams can pre‐emptively prepare for the difficulties and needs of teachers when integrating middle school teacher feedback during the co‐design process in support of PBL technologies.
Seeking to support preservice teachers' responsive teaching: Leveraging artificial intelligence‐supported virtual simulation
Abstract

Preparing preservice teachers (PSTs) to notice, interpret, respond to and orchestrate student ideas—the core practices of responsive teaching—is a key goal for contemporary science and mathematics teacher education. This mixed‐methods study, employing a virtual reality (VR)‐supported simulation integrated with artificial intelligence (AI)‐powered virtual students, explored the frequent patterns of PSTs' talk moves as they attempted to orchestrate a responsive discussion, as well as the affordances and challenges of leveraging AI‐supported virtual simulation to enhance PSTs' responsive teaching skills. Sequential analysis of the talk moves of both PSTs (n = 24) and virtual students indicated that although PSTs did employ responsive talk moves, they encountered difficulties in transitioning from an authoritative, teacher‐centred teaching approach to a responsive way of teaching. The qualitative analysis of triangulated dialogue transcripts, observational field notes and semi‐structured interviews revealed participants' engagement in (1) orchestrating discussion by leveraging the design features of the AI‐supported simulation, (2) iterative rehearsals through naturalistic and contextualized interactions and (3) exploring realism and boundaries in AI‐powered virtual students. The study findings provide insights into the potential of leveraging AI‐supported virtual simulation to improve PSTs' responsive teaching skills, and underscore the need for PSTs to engage in well‐designed pedagogical practices with adaptive and in situ support.

Practitioner notes

What is already known about this topic
- Developing the capacity for responsive teaching is an important goal for preservice teacher (PST) education; PSTs need systematic opportunities to build fluency in this approach.
- Virtual simulations can provide PSTs with opportunities to practice interactive teaching and have been shown to improve their teaching skills.
- Artificial intelligence (AI)‐powered virtual students can be integrated into virtual simulations to enable interactive and authentic practice of teaching.

What this paper adds
- AI‐supported simulation has the potential to support PSTs' responsive teaching skills.
- While PSTs enact responsive teaching talk moves, they struggle to enact those moves in challenging teaching scenarios due to limited epistemic and pedagogical resources.
- AI‐supported simulation affords iterative and contextualized opportunities for PSTs to practice responsive teaching talk moves; it also challenges them to analyse student discourse and respond in real time.

Implications for practice and/or policy
- PSTs should build a teaching repertoire with both basic and advanced responsive talk moves.
- The learning module should adapt to PSTs' prior experience and provide in situ support for navigating challenging teaching scenarios.
- Integrating interaction features and AI‐based virtual students into the simulation can facilitate PSTs' active participation.
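The sequential analysis of talk moves described above can be approximated with first-order (bigram) transition counts over a coded transcript. The sketch below is a minimal illustration only: the talk-move labels and the sequence are invented placeholders, not the study's actual coding scheme or data.

```python
from collections import Counter

# Hypothetical coded sequence of talk moves; labels are illustrative
# placeholders, not the study's coding scheme.
moves = [
    "eliciting", "sharing_idea", "evaluating",   # an authoritative exchange
    "eliciting", "sharing_idea", "revoicing",    # a responsive exchange
    "elaborating",
]

def transition_counts(sequence):
    """Count first-order (bigram) transitions between consecutive moves."""
    return Counter(zip(sequence, sequence[1:]))

counts = transition_counts(moves)
# A frequent "sharing_idea -> evaluating" transition would suggest an
# authoritative pattern; "sharing_idea -> revoicing" a responsive one.
print(counts[("eliciting", "sharing_idea")])  # → 2
```

In practice, such counts are typically normalized into transition probabilities and tested for significance, but the raw bigram table is the starting point.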
- Award ID(s):
- 2110777
- PAR ID:
- 10581957
- Publisher / Repository:
- Wiley-Blackwell
- Date Published:
- Journal Name:
- British Journal of Educational Technology
- Volume:
- 56
- Issue:
- 3
- ISSN:
- 0007-1013
- Format(s):
- Medium: X
- Size(s):
- p. 1148-1169
- Sponsoring Org:
- National Science Foundation
More Like this
-
With increasing interest in computer‐assisted education, AI‐integrated systems are becoming highly applicable, given their ability to adapt based on user interactions. In this context, this paper focuses on understanding and analysing first‐year undergraduate student responses to an intelligent educational system that applies multi‐agent reinforcement learning as an AI tutor. With human–computer interaction at the centre, we discuss principles of interface design and educational gamification in the context of multiple years of student observations, student feedback surveys and focus group interviews. We show positive feedback on the design methodology we discuss, as well as on the overall process of providing automated tutoring in a gamified virtual environment. We also discuss students' thinking in the context of gamified educational systems, as well as unexpected issues that may arise when implementing such systems. Ultimately, our design iterations and analysis offer new insights for the practical implementation of computer‐assisted educational systems, focusing on how AI can augment, rather than replace, human intelligence in the classroom.
Practitioner notes

What is already known about this topic
- AI‐integrated systems show promise for personalizing learning and improving student education.
- Existing research has shown the value of personalized learner feedback.
- Engaged students learn more effectively.

What this paper adds
- Student opinions of and responses to an HCI‐based personalized educational system.
- New insights for practical implementation of AI‐integrated educational systems, informed by years of student observations and system improvements.
- Qualitative insights into system design to improve human–computer interaction in educational systems.

Implications for practice and/or policy
- Actionable design principles for computer‐assisted tutoring systems, derived from first‐hand student feedback and observations.
- New directions for human–computer interaction in educational systems.
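The abstract above does not specify the multi-agent reinforcement learning formulation, so as a much-simplified, single-agent illustration of how an RL-based tutor can adapt from user interactions, a tabular Q-learning update over learner states and support actions might look like the following. All states, actions and reward values here are invented for the sketch, not taken from the system described.

```python
import random

# Hypothetical learner states and tutor support actions (illustrative only).
STATES = ["struggling", "progressing", "mastered"]
ACTIONS = ["no_hint", "small_hint", "worked_example"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

# Q-table: expected long-term value of each (state, action) pair.
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def choose_action(state, rng=random):
    """Epsilon-greedy selection over the Q-table."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Standard tabular Q-learning update."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One illustrative step: a small hint moved a struggling learner forward.
update("struggling", "small_hint", reward=1.0, next_state="progressing")
print(Q[("struggling", "small_hint")])  # → 0.5
```

A deployed tutor would extend this along the lines the paper discusses: multiple cooperating agents, a richer state space derived from interaction logs, and rewards tied to learning outcomes rather than a single hand-set value.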
-
Abstract

Recent advances in generative artificial intelligence (AI) and multimodal learning analytics (MMLA) have allowed for new and creative ways of leveraging AI to support K12 students' collaborative learning in STEM+C domains. To date, there is little evidence of AI methods supporting students' collaboration in complex, open‐ended environments. AI systems are known to underperform humans in (1) interpreting students' emotions in learning contexts, (2) grasping the nuances of social interactions and (3) understanding domain‐specific information that was not well represented in the training data. As such, combined human and AI (ie, hybrid) approaches are needed to overcome the current limitations of AI systems. In this paper, we take a first step towards investigating how a human‐AI collaboration between teachers and researchers using an AI‐generated multimodal timeline can guide and support teachers' feedback while addressing students' STEM+C difficulties as they work collaboratively to build computational models and solve problems. In doing so, we present a framework characterizing the human component of our human‐AI partnership as a collaboration between teachers and researchers. To evaluate our approach, we present our timeline to a high school teacher and discuss the key insights gleaned from our discussions. Our case study analysis reveals the effectiveness of an iterative approach to using human‐AI collaboration to address students' STEM+C challenges: the teacher can use the AI‐generated timeline to guide formative feedback for students, and the researchers can leverage the teacher's feedback to help improve the multimodal timeline. Additionally, we characterize our findings with respect to two events of interest to the teacher: (1) when the students cross a difficulty threshold, and (2) the point of intervention, that is, when the teacher (or system) should intervene to provide effective feedback. It is important to note that the teacher explained that there should be a lag between (1) and (2) to give students a chance to resolve their own difficulties. Typically, such a lag is not implemented in computer‐based learning environments that provide feedback.

Practitioner notes

What is already known about this topic
- Collaborative, open‐ended learning environments enhance students' STEM+C conceptual understanding and practice, but they introduce additional complexities when students learn concepts spanning multiple domains.
- Recent advances in generative AI and MMLA allow for integrating multiple datastreams to derive holistic views of students' states, which can support more informed feedback mechanisms to address students' difficulties in complex STEM+C environments.
- Hybrid human‐AI approaches can help address collaborating students' STEM+C difficulties by combining the domain knowledge, emotional intelligence and social awareness of human experts with the general knowledge and efficiency of AI.

What this paper adds
- We extend a previous human‐AI collaboration framework using a hybrid intelligence approach to characterize the human component of the partnership as a researcher‐teacher partnership, and present our approach as a teacher‐researcher‐AI collaboration.
- We adapt an AI‐generated multimodal timeline to actualize our human‐AI collaboration by pairing the timeline with videos of students encountering difficulties, engaging in active discussions with a high school teacher while watching the videos to discern the timeline's utility in the classroom.
- From our discussions with the teacher, we define two types of inflection points to address students' STEM+C difficulties—the difficulty threshold and the intervention point—and discuss how the feedback latency interval separating them can inform educator interventions.
- We discuss two ways in which our teacher‐researcher‐AI collaboration can help teachers support students encountering STEM+C difficulties: (1) teachers using the multimodal timeline to guide feedback for students, and (2) researchers using teachers' input to iteratively refine the multimodal timeline.

Implications for practice and/or policy
- Our case study suggests that timeline gaps (ie, disengaged behaviour identified by off‐screen students, pauses in discourse and lulls in environment actions) are particularly important for identifying inflection points and formulating formative feedback.
- Human‐AI collaboration exists on a dynamic spectrum and requires varying degrees of human control and AI automation depending on the context of the learning task and students' work in the environment.
- Our analysis of this human‐AI collaboration using a multimodal timeline can be extended in the future to support students and teachers in additional ways, for example, designing pedagogical agents that interact directly with students, developing intervention and reflection tools for teachers, helping teachers craft daily lesson plans and aiding teachers and administrators in designing curricula.
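The relationship between the difficulty threshold, the intervention point and the feedback latency interval separating them can be sketched as a simple rule over a per-step difficulty signal: flag the threshold crossing, wait out the latency interval, and recommend intervening only if the difficulty has not resolved on its own. The threshold, latency and score values below are invented for illustration, not taken from the study.

```python
# Illustrative constants: a difficulty score in [0, 1] per timeline step.
DIFFICULTY_THRESHOLD = 0.7
FEEDBACK_LATENCY = 3  # timeline steps to wait before intervening

def intervention_points(scores, threshold=DIFFICULTY_THRESHOLD,
                        latency=FEEDBACK_LATENCY):
    """Return timeline indices where an intervention is recommended."""
    points = []
    crossing = None  # step at which difficulty last crossed the threshold
    for t, score in enumerate(scores):
        if score >= threshold:
            if crossing is None:
                crossing = t           # difficulty threshold crossed
            elif t - crossing >= latency:
                points.append(t)       # intervention point reached
                crossing = None
        else:
            crossing = None            # students resolved it themselves
    return points

# Difficulty persists for >= latency steps only in the second episode,
# so only one intervention is recommended.
scores = [0.2, 0.8, 0.3, 0.9, 0.85, 0.9, 0.95, 0.4]
print(intervention_points(scores))  # → [6]
```

The deliberate gap between crossing and intervening encodes the teacher's point that students should first get a chance to work through difficulties themselves.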
-
Abstract In this study, support for teaching data literacy in social studies is provided through the design of a pedagogical support system informed by participatory design sessions with both pre‐service and in‐service social studies teachers. It provides instruction on teaching and learning data literacy in social studies, examples of standards‐based lesson plans, made‐to‐purpose data visualization tools and minimal manuals that put existing online tools in a social studies context. Based on case studies of eleven practicing teachers, this study provides insight into features of technology resources that social studies teachers find usable and useful for using data visualizations as part of standards‐ and inquiry‐based social studies instruction, teaching critical analysis of data visualizations and helping students create data visualizations with online computing tools. The final result, though, is that few of our participating teachers have yet adopted the provided resources into their own classrooms, which highlights weaknesses of the technology acceptance model for describing teacher adoption. 
Practitioner notes

What is already known about this topic
- Data literacy is an important part of social studies education in the United States.
- Most teachers do not teach data literacy as part of social studies.
- Teachers may adopt technology to help them teach data literacy if they think it is useful and usable.

What this paper adds
- Educational technology can help teachers learn about data literacy in social studies.
- Social studies teachers want simple tools that fit with their existing curricula, give them new project ideas and help students learn difficult concepts.
- Making tools useful and usable does not predict adoption; context plays a large role in social studies teachers' adoption.

Implications for practice and/or policy
- Designing purpose‐built tools for social studies teachers will encourage them to teach data literacy in their classes.
- Professional learning opportunities for teachers around data literacy should include opportunities for experimentation with tools.
- Teachers are not likely to use tools that are not accompanied by lesson and project ideas.
-
While offering the potential to support learning interactions, emerging AI applications like Large Language Models (LLMs) come with ethical concerns. Grounding technology design in human values can address AI ethics and ensure adoption. To this end, we apply Value‐Sensitive Design—involving empirical, conceptual and technical investigations—to centre human values in the development and evaluation of LLM‐based chatbots within a high school environmental science curriculum. Representing multiple perspectives and expertise, the chatbots help students refine their causal models of climate change's impact on local marine ecosystems, communities and individuals. We first perform an empirical investigation leveraging participatory design to explore the values that motivate students and educators to engage with the chatbots. Then, we conceptualize the values that emerge from the empirical investigation by grounding them in research in ethical AI design, human values, human‐AI interactions and environmental education. Findings illuminate considerations for the chatbots to support students' identity development, well‐being, human–chatbot relationships and environmental sustainability. We further map the values onto design principles and illustrate how these principles can guide the development and evaluation of the chatbots. Our research demonstrates how to conduct contextual, value‐sensitive inquiries of emergent AI technologies in educational settings. 
Practitioner notes

What is already known about this topic
- Generative artificial intelligence (GenAI) technologies like Large Language Models (LLMs) can not only support learning but also raise ethical concerns such as transparency, trust and accountability.
- Value‐sensitive design (VSD) presents a systematic approach to centring human values in technology design.

What this paper adds
- We apply VSD to design LLM‐based chatbots in environmental education and identify values central to supporting students' learning.
- We map the values emerging from the VSD investigations to several stages of GenAI technology development: conceptualization, development and evaluation.

Implications for practice and/or policy
- Identity development, well‐being, human–AI relationships and environmental sustainability are key values for designing LLM‐based chatbots in environmental education.
- Using educational stakeholders' values to generate design principles and evaluation metrics for learning technologies can promote technology adoption and engagement.