Title: Assessment for Learning: Helping Children Build on What They Know
Efforts to improve instruction frequently focus on fostering meaningful learning (learning based on conceptual understanding) as opposed to knowledge memorized by rote. Consistent with Dewey's (1963) principle of interaction, fostering meaningful learning entails identifying what children already know and do not know and building on the former to learn (moderately) new knowledge (Claessens & Engel, 2013; Fyfe et al., 2012; Piaget, 1964; Vygotsky, 1978). A learning trajectory (LT) approach to instruction epitomizes such an effort: it includes conceptually and research-based goals, a research-based learning progression of successive developmental levels, and research-based teaching activities to promote each level (Clements & Sarama, 2008; Confrey et al., 2012). Formative, classroom-based assessment, that is, ongoing assessment to guide and monitor student learning (Black et al., 2003; Cizek, 2010; Author, 2018a), is an integral aspect of the LT approach (Daro et al., 2011). In contrast to the more commonly used summative assessment strategy (e.g., a unit test given at the end of an instructional unit to assess whether unit content has been mastered and to grade progress), formative assessment serves to identify what developmental level a child has already achieved and the next developmentally appropriate level at which instruction should begin (Author, 2018a). Moreover, children are regularly assessed during instruction to gauge whether they, individually or collectively, have mastered a developmental level before instruction proceeds to the next higher level. In sum, the LT approach involves using formative assessment (National Mathematics Advisory Panel, 2008; Shepard et al., 2018) to provide instructional activities aligned with empirically validated developmental progressions (Fantuzzo, Gadsden, & McDermott, 2011). Although research has shown that LT-based instruction is more efficacious than instruction not aligned with a developmental progression, research is needed to evaluate the added value of the formative assessment components of LT-based instruction for student outcomes and for the professional development of teachers. This presentation will highlight future lines of research that would provide insight into the underlying theory and into more productive instructional strategies. Because LTs "need to be supplemented with consideration of obstacles that the student must overcome" (Ginsburg, 2009), much needs to be learned about the obstacles posed by the content itself, by instructional materials, and by teachers.
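The assess-teach-reassess cycle described above can be sketched as a simple control loop. The following minimal Python sketch is purely illustrative: the level names, the assess_level and teach helpers, and the linear progression are hypothetical simplifications, not an implementation from the cited literature.

```python
# Illustrative sketch of the LT formative-assessment cycle (hypothetical API).
# A learning trajectory is modeled as an ordered list of developmental levels;
# instruction targets the next level beyond the one the child has mastered,
# and the child is reassessed before instruction moves on.

LEVELS = ["subitizing", "verbal counting", "counting on", "derived facts"]  # example progression

def assess_level(child: dict) -> int:
    """Formative assessment: index of the highest level mastered (-1 if none).
    Placeholder: a real assessment would use observation or task-based probes."""
    return child.get("mastered", -1)

def teach(child: dict, level: str) -> None:
    """Deliver research-based activities for one level (placeholder)."""
    print(f"Teaching {child['name']} activities for level: {level}")
    child["mastered"] = LEVELS.index(level)  # assume mastery, for illustration only

def lt_instruction(child: dict) -> None:
    # Begin at the next developmentally appropriate level rather than at a
    # fixed grade-level topic, and reassess before proceeding upward.
    while (current := assess_level(child)) < len(LEVELS) - 1:
        teach(child, LEVELS[current + 1])

lt_instruction({"name": "Ana", "mastered": 0})
```

In practice, of course, mastery is checked rather than assumed, and progressions branch by topic; the loop only makes the assess-then-teach ordering concrete.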
Award ID(s):
2201039
PAR ID:
10445530
Author(s) / Creator(s):
Editor(s):
Wiebe, E. N.; Harris, C. J.; Grover, S. 
Date Published:
Journal Name:
American Educational Research Association Annual Meeting
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Olanoff, D.; Johnson, K.; Spitzer, S. (Ed.)
    Administrators, educators, and stakeholders have faced the dilemma of determining the most effective type of data for informing instruction for quite some time (Pella, 2015). While the type of standardized assessment a teacher gives during instruction is often set at the district or state level, teachers often have autonomy in the formative and summative assessments that serve as the day-to-day tools in assessing a student’s progress (Abrams et al., 2016). Choices about in-class assessment and instruction are building blocks towards a student’s success on standardized assessments. The purpose of this phenomenological qualitative study is to explore how 4th-8th grade math teachers’ preparation and instructional practices are influenced by the types of assessments administered to their students in one school. Research questions are as follows: (a) How do 4th-8th grade math teachers describe the math assessments they use? (b) How do 4th-8th grade math teachers adjust their instructional practices as a result of their students completing formative, summative, and standardized math assessments? 
  2. In response to Li, Reigh, He, and Miller's commentary, "Can we and should we use artificial intelligence for formative assessment in science?", we argue that artificial intelligence (AI) is already being widely employed in formative assessment across various educational contexts. While agreeing with Li et al.'s call for further studies on equity issues related to AI, we emphasize the need for science educators to adapt to the AI revolution that has outpaced the research community. We challenge the somewhat restrictive view of formative assessment presented by Li et al., highlighting the significant contributions of AI in providing formative feedback to students, assisting teachers in assessment practices, and aiding in instructional decisions. We contend that AI-generated scores should not be equated with the entirety of formative assessment practice; no single assessment tool can capture all aspects of student thinking and backgrounds. We address concerns raised by Li et al. regarding AI bias and emphasize the importance of empirical testing and evidence-based arguments when claiming bias. We assert that AI-based formative assessment does not necessarily lead to inequity and can, in fact, contribute to more equitable educational experiences. Furthermore, we discuss how AI can facilitate the diversification of representational modalities in assessment practices and highlight the potential benefits of AI in saving teachers' time and providing them with valuable assessment information. We call for a shift in perspective, from viewing AI as a problem to be solved to recognizing its potential as a collaborative tool in education. We emphasize the need for future research to focus on the effective integration of AI in classrooms, teacher education, and the development of AI systems that can adapt to diverse teaching and learning contexts. We conclude by underlining the importance of addressing AI bias, understanding its implications, and developing guidelines for best practices in AI-based formative assessment.
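As a concrete, deliberately toy illustration of the machine-scored formative feedback discussed above, the Python sketch below checks a short constructed response against a keyword rubric and flags the result for teacher review. The rubric, keywords, and function name are invented; a deployed system would use a trained scoring model rather than substring matching, and the teacher, not the score, would make the instructional decision.

```python
# Hypothetical sketch: automated formative feedback on a short science response.
# Substring matching stands in for a trained scoring model so the example
# stays self-contained.

RUBRIC = {
    "evaporat": "water changing from liquid to gas",
    "condens": "vapor cooling back into liquid droplets",
    "sun": "the sun as the energy source driving the cycle",
}

def formative_feedback(response: str) -> dict:
    text = response.lower()
    present = [stem for stem in RUBRIC if stem in text]
    missing = [stem for stem in RUBRIC if stem not in present]
    return {
        "score": len(present),  # one point per rubric idea detected
        "feedback": [f"Consider mentioning {RUBRIC[stem]}." for stem in missing],
        "needs_teacher_review": not present,  # flag for the teacher; never auto-decide
    }

print(formative_feedback("The sun heats the lake and the water evaporates into the air."))
# -> score 2, with feedback prompting the student about condensation
```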
  3. Rajala, A.; Cortez, A.; Hofmann, A.; Jornet, A.; Lotz-Sisitka, H.; Markauskaite, M. (Ed.)
    Computational modeling of scientific systems is a powerful approach for fostering science and computational thinking (CT) proficiencies. However, the role of programming activities in this synergistic learning remains unclear. This paper examines alternative ways to engage with computational models (CM) beyond programming. Students participated in an integrated Science, Engineering, and Computational Modeling unit through one of three distinct instructional versions: Construct a CM, Interpret-and-Evaluate a CM, and Explore-and-Evaluate a simulation. Analyzing 188 student responses to a science+CT embedded assessment task, we investigate how science proficiency and instructional version related to pseudocode interpretation and debugging performance. We found that students in the Explore-and-Evaluate a simulation version outperformed students in the programming-based versions on the CT assessment items. Additionally, science proficiency strongly predicted students' CT performance, unlike prior programming experience. These results highlight the promise of diverse approaches for fostering CT practices, with implications for STEM+C instruction and assessment design. 
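To make "pseudocode interpretation and debugging" concrete, here is one hypothetical embedded-assessment item of the kind the abstract describes, represented as a small Python record; the scenario, pseudocode, and planted bug are invented for illustration and are not items from the study.

```python
# Hypothetical science+CT embedded-assessment item: students must trace the
# pseudocode of an ecosystem model and locate the bug (content invented).

DEBUGGING_ITEM = {
    "prompt": (
        "This pseudocode should move every fish toward food on each tick, "
        "but the fish never move. Which line is wrong, and why?"
    ),
    "pseudocode": [
        "for each tick:",
        "    for each fish:",
        "        if distance(fish, food) > 0:",
        "            set speed to 0",       # the planted bug: speed should be positive
        "            move fish by speed",
    ],
    "targets": "tracing state through a loop (CT) within an ecosystem model (science)",
    "key": "Line 4: speed is set to 0, so 'move fish by speed' moves the fish nowhere.",
}

for number, line in enumerate(DEBUGGING_ITEM["pseudocode"], start=1):
    print(f"{number}: {line}")
```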
  4. In this work-in-progress paper, we continue investigating the propagation of the Concept Warehouse within mechanical engineering (Friedrichsen et al., 2017; Koretsky et al., 2019a). Even before the pandemic forced most instruction online, educational technology was a growing element in classroom culture (Koretsky & Magana, 2019b). However, adoption of technology tools for widespread use is often conceived through a turn-key lens, with professional development focused on procedural competencies and with fidelity of implementation as the goal (Mills & Ragan, 2000; O'Donnell, 2008). Educators are given the tool with initial operating instructions, then left on their own to implement it in particular instructional contexts. There is little emphasis on the inevitable instructional decisions around incorporating the tool (Hodge, 2019) or on sustainably incorporating technologies into existing instructional practice (Forkosh-Baruch et al., 2021). We instead consider the take-up of a technology tool as an emergent, rather than a prescribed, process (Henderson et al., 2011). In this WIP paper, we examine how two instructors, whom we call Al and Joe, reason through their adoption of a technology tool, focusing on interactions among instructors, tool, and students within and across contexts.

    The Concept Warehouse (CW) is a widely available, web-based, open educational technology tool used to facilitate concept-based active learning in different contexts (Friedrichsen et al., 2017; Koretsky et al., 2014). Development of the CW is ongoing and collaboration-driven: user-instructors from different institutions and disciplines can develop conceptual questions (called ConcepTests) and other learning and assessment tools that can be shared with other users. Currently there are around 3,500 ConcepTests, 1,500 faculty users, and 36,000 student users; about 700 ConcepTests have been developed for mechanics (statics and dynamics). The tool's spectrum of affordances offers different entry points for instructor engagement, but also allows instructors' use to grow and change as they become familiar with the tool and take up ideas from the contexts around them.

    As part of a larger study of propagation and use across five diverse institutions (Nolen & Koretsky, 2020), instructors were introduced to the tool, offered an introductory workshop and an opportunity to participate in a community of practice (CoP), and then interviewed early and later in their adoption. For this paper, we explore a bounded case study of the two instructors, Al and Joe, who took up the CW to teach Introductory Statics. Al and Joe were experienced instructors, committed to active learning, who presented examples from their ongoing adaptation of the tool for discussion in the community of practice. However, their decisions about how to integrate the tool differed fundamentally, including in which aspects of the tool they took up and how they made sense of their use. In analyzing these two cases, we begin to uncover how these instructors navigated the dynamic nature of pedagogical decision making in and across contexts. 