Title: Formative Fugues: Reconceptualizing Formative Feedback for Complex Systems Learning Environments
The Next Generation Science Standards and the National Research Council recognize systems thinking as an essential skill for addressing the global challenges of the 21st century. But the habits of mind needed to understand complex systems are not readily learned through traditional approaches. Recently, large-scale interactive multi-user immersive simulations have been used to expose learners to diverse topics that emulate real-world complex systems phenomena. These modern-day mixed reality simulations are unique in that the learners are an integral part of the evolving dynamics: the decisions they make, and the actions that follow, collectively impact the simulated complex system, much like any real-world complex system. Learners, however, have difficulty understanding these coupled complex systems processes; they often get “lost” or “stuck,” and need help navigating the problem space. Formative feedback is the traditional way educators support learners during problem solving, but traditional goal-based and learner-centered approaches do not scale well to environments that allow learners to explore multiple goals or solutions and multiple solution paths (Mallavarapu & Lyons, 2020). In this work, we reconceptualize formative feedback for complex systems-based learning environments as formative fugues (a term derived from music by Reitman, 1964), which allow learners to make informed decisions about their own exploration paths. We discuss a novel computational approach that employs causal inference and pattern matching to characterize the exploration paths of prior learners and generate situationally relevant formative feedback. We extract formative fugues from data collected from an ecological complex systems simulation installed at a museum. The extracted feedback does not presume the goals of the learners, but helps learners understand what choices and events led to the current state of the problem space and what paths forward are possible. We conclude with a discussion of the implications of using formative fugues for complex systems education.
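To make the pattern-matching half of this approach concrete, here is a minimal Python sketch, assuming learner interaction logs are simple sequences of actions; the event vocabulary, data, and function names below are hypothetical illustrations, not the authors' implementation. Prior learners' exploration paths are mined offline, a live learner's trace is matched against them by prefix, and the summary of what prior learners did next can ground feedback that presumes no particular goal.

    # Hypothetical sketch of fugue-style feedback: match a live learner's
    # action trace against exploration paths mined from prior learners,
    # then surface which paths forward are possible. Illustrative data only.
    from collections import Counter

    # Prior learners' exploration paths, mined offline (invented examples).
    prior_paths = [
        ("add_plants", "add_herbivores", "observe_crash"),
        ("add_plants", "add_herbivores", "add_predators", "observe_balance"),
        ("add_herbivores", "observe_crash"),
    ]

    def matching_paths(trace, paths):
        """Return prior paths whose prefix matches the learner's trace so far."""
        n = len(trace)
        return [p for p in paths if p[:n] == tuple(trace)]

    def possible_next_steps(trace, paths):
        """Count which actions prior learners on matching paths took next."""
        nexts = Counter()
        for p in matching_paths(trace, paths):
            if len(p) > len(trace):
                nexts[p[len(trace)]] += 1
        return nexts

    # A live learner two actions into the simulation:
    trace = ["add_plants", "add_herbivores"]
    print(possible_next_steps(trace, prior_paths))
    # Counter({'observe_crash': 1, 'add_predators': 1})
    # Feedback can then describe both paths forward without presuming a goal.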
Award ID(s):
1822864
NSF-PAR ID:
10355146
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
International Journal of Complexity in Education
Volume:
2
Issue:
2
ISSN:
2643-4717
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Anna N. Rafferty, Jacob Whitehill (Eds.)
    Learners are being exposed to abstract skills like innovation, creativity, and reasoning through collaborative open-ended problems. Most of these problems, like their real-world counterparts, have no definite starting or ending point and no fixed strategies for solving them. To help learners explore the multiple perspectives of the problem solutions, there is an urgent need to design formative feedback in these environments. Unfortunately, there are barriers to using existing EDM approaches to provide formative feedback to learners in these environments: (1) due to the vast solution space and the lack of verifiability of the solutions, it is impossible to create task and expert models, thus making the detection of learners' progress impractical; (2) formative feedback based on individual learner models does not scale well when many learners are collaborating to solve the same problem. In this work, we redefine formative feedback as reshaping the learning environment and learners' exploration paths by exposing/enlisting “fugues” as defined by Reitman [28]. Through a case study approach we (1) validate methods to extract learners' “fugues” from a collaborative open-ended museum exhibit, (2) design formative feedback for learners and educators using these extracted fugues in real time, and (3) evaluate the impact of exposing fugues to groups of learners interacting with the exhibit.
  2. This research explores a novel human-in-the-loop approach that goes beyond traditional prompt engineering approaches to harness Large Language Models (LLMs) with chain-of-thought prompting for grading middle school students’ short answer formative assessments in science and generating useful feedback. While recent efforts have successfully applied LLMs and generative AI to automatically grade assignments in secondary classrooms, the focus has primarily been on providing scores for mathematical and programming problems with little work targeting the generation of actionable insight from the student responses. This paper addresses these limitations by exploring a human-in-the-loop approach to make the process more intuitive and more effective. By incorporating the expertise of educators, this approach seeks to bridge the gap between automated assessment and meaningful educational support in the context of science education for middle school students. We have conducted a preliminary user study, which suggests that (1) co-created models improve the performance of formative feedback generation, and (2) educator insight can be integrated at multiple steps in the process to inform what goes into the model and what comes out. Our findings suggest that in-context learning and human-in-the-loop approaches may provide a scalable approach to automated grading, where the performance of the automated LLM-based grader continually improves over time, while also providing actionable feedback that can support students’ open-ended science learning. 
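    A minimal sketch of the chain-of-thought grading setup this abstract describes, assuming a generic LLM client: the educator-authored rubric is injected into the prompt, the model is asked to reason criterion by criterion before scoring, and the draft is returned for educator review. `call_llm`, the function names, and all prompt wording here are placeholders, not the paper's code.

    # Illustrative sketch (not the study's implementation) of rubric-grounded
    # chain-of-thought grading with a human-in-the-loop review step.
    def build_grading_prompt(question, rubric, student_answer):
        """Assemble a chain-of-thought prompt that asks the model to reason
        against each rubric criterion before committing to a score."""
        return (
            f"Question: {question}\n"
            f"Teacher-authored rubric:\n{rubric}\n"
            f"Student answer: {student_answer}\n\n"
            "Think step by step: check the answer against each rubric "
            "criterion, quoting the relevant part of the student answer, "
            "then give a score and one sentence of actionable feedback."
        )

    def grade_with_review(question, rubric, student_answer, call_llm):
        """Human-in-the-loop: the model drafts a grade; the educator reviews
        the draft and may revise the rubric, closing the co-creation loop."""
        draft = call_llm(build_grading_prompt(question, rubric, student_answer))
        return draft  # surfaced to the educator for approval or rubric edits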
  3.
    Unlike summative assessment that is aimed at grading students at the end of a unit or academic term, formative assessment is assessment for learning, aimed at monitoring ongoing student learning to provide feedback to both student and teacher, so that learning gaps can be addressed during the learning process. Education research points to formative assessment as a crucial vehicle for improving student learning. Formative assessment in K-12 CS and programming classrooms remains a crucial unaddressed need. Given that assessment for learning is closely tied to teacher pedagogical content knowledge, formative assessment literacy needs to also be a topic of CS teacher PD. This position paper addresses the broad need to understand formative assessment and build a framework to understand the what, why, and how of formative assessment of introductory programming in K-12 CS. It shares specific programming examples to articulate the cycle of formative assessment, diagnostic evaluation, feedback, and action. The design of formative assessment items is informed by CS research on assessment design, albeit related largely to summative assessment and in CS1 contexts, and learning of programming, especially student misconceptions. It describes what teacher formative assessment literacy PD should entail and how to catalyze assessment-focused collaboration among K-12 CS teachers through assessment platforms and repositories.
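    As a hedged illustration of the diagnose-feedback-act cycle this abstract names (the item, the misconception it targets, and the feedback mapping are invented for this summary, not taken from the paper), a formative item might probe a common loop misconception and map anticipated answers to feedback and a next action:

    # Illustrative formative-assessment item (invented, not from the paper)
    # probing the misconception that reassigning a loop variable mutates
    # the list it came from.
    item = """
    nums = [1, 2, 3]
    for n in nums:
        n = n * 2
    print(nums)   # What is printed?
    """

    # Diagnostic mapping from anticipated answers to feedback and action.
    diagnosis = {
        "[2, 4, 6]": ("Misconception: the loop variable aliases list elements.",
                      "Trace n on paper; contrast with nums[i] = nums[i] * 2."),
        "[1, 2, 3]": ("Correct: n is a copy; reassigning it leaves nums intact.",
                      "Extend: ask how to actually double every element."),
    }

    def formative_feedback(student_answer):
        """Return (evaluation, suggested action) for a predicted output."""
        return diagnosis.get(student_answer.strip(),
                             ("Unanticipated answer.", "Discuss one-on-one."))

    print(formative_feedback("[2, 4, 6]"))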
  4.
    This case study investigated preservice secondary mathematics teachers' (PSMTs') developing understanding of formative assessment as a way to leverage mathematical practices for student learning. Qualitative data came from PSMTs' responses to a learning sequence designed to help them develop nuanced and disciplinary-specific understandings of formative assessment. Analyses suggest a trajectory for PSMTs learning to use formative assessment in problem-solving environments, but also show how the orientation of one PSMT continued to obstruct her progress. The results suggest that mathematics teacher educators must themselves use formative assessment that supports PSMTs' developing orientations to currently suggested ways of teaching.
    With growing interest in supporting the development of computational thinking (CT) in early childhood, there is also a need for new assessments that serve multiple purposes and uses. In particular, there is a need to understand the design of formative assessments that can be used during classroom instruction to provide feedback to teachers and children in real time. In this paper, we report on an empirical study and advance a new unit of observational analysis for formative assessment that we call an indicator of a knowledge refinement opportunity or, as a shorthand, a KRO indicator. We put forth a new framework for conceptualizing the design of formative assessments that builds on the Evidence Centered Design framework but centers the identification and analysis of indicators of knowledge refinement opportunities. We illustrate a number of key indicators through empirical examples drawn from video recordings of kindergarten classroom lessons.