Search for: All records

Creators/Authors contains: "Krajcik, Joseph"


  1. Abstract

    We discuss transforming STEM education using three aspects: learning progressions (LPs), constructed response performance assessments, and artificial intelligence (AI). Using LPs to inform instruction, curriculum, and assessment design helps foster students’ ability to apply content and practices to explain phenomena, which reflects deeper science understanding. To measure progress along these LPs, performance assessments combining elements of disciplinary ideas, crosscutting concepts, and practices are needed. However, these tasks are time-consuming and expensive to score and provide feedback on. AI makes it possible to validate the LPs and evaluate performance assessments for many students quickly and efficiently. The evaluation provides a report describing student progress along the LP and the supports needed to attain a higher LP level. We suggest using unsupervised and semi-supervised machine learning (ML) and generative AI (GAI) at early LP validation stages to identify relevant proficiency patterns and start building an LP. We further suggest employing supervised ML and GAI to develop targeted LP-aligned performance assessments for more accurate performance diagnosis at advanced LP validation stages. Finally, we discuss employing AI to design automatic feedback systems that provide personalized feedback to students and help teachers implement LP-based learning. We discuss the challenges of realizing these tasks and propose future research avenues.

     
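    As a rough illustration of the early-stage analysis described above, the sketch below clusters constructed responses to surface candidate proficiency patterns. It is a minimal sketch only: the TF-IDF features, k-means clustering, and sample responses are assumptions for illustration, not the authors' actual pipeline.

        # Minimal illustrative sketch (not the authors' pipeline): cluster student
        # constructed responses to surface candidate proficiency patterns that could
        # seed early learning progression (LP) levels.
        from sklearn.cluster import KMeans
        from sklearn.feature_extraction.text import TfidfVectorizer

        # Hypothetical constructed responses to a performance assessment item.
        responses = [
            "The ice melts because heat energy transfers from the warm air to the ice.",
            "The ice melts because it is hot outside.",
            "Thermal energy flows from the air into the ice until both reach the same temperature.",
            "Ice turns into water when it gets warm.",
        ]

        # Represent each response numerically and group similar responses together.
        vectors = TfidfVectorizer(stop_words="english").fit_transform(responses)
        clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

        # Researchers would then inspect each cluster for a shared proficiency pattern
        # (e.g., mechanistic energy-transfer accounts vs. descriptive accounts) and use
        # those patterns to draft provisional LP levels.
        for label, text in zip(clusters, responses):
            print(label, text)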
  2. The Framework for K-12 Science Education recognizes modeling as an essential practice for building deep understanding of science. Modeling assessments should measure students' ability to integrate Disciplinary Core Ideas and Crosscutting Concepts. Machine learning (ML) has been utilized to score and provide feedback on open-ended Learning Progression (LP)-aligned assessments. Analytic rubrics have been shown to make it easier to evaluate the validity of ML-based scores. A possible drawback of using analytic rubrics is the potential for oversimplification of integrated ideas. We demonstrate the deconstruction of a 3D holistic rubric for modeling assessments aligned to an LP for Physical Science. We describe deconstructing this rubric into analytic categories for ML training while preserving its 3D nature.
    Free, publicly-accessible full text available April 11, 2025
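
    The sketch below gives one minimal way to represent the deconstruction this abstract describes: a holistic 3D level expressed as separate analytic categories, each simple enough to be an ML scoring target. The category names and the recombination rule are hypothetical, not the authors' rubric.

        # Illustrative sketch: deconstruct a holistic 3D modeling rubric into analytic
        # (binary) categories that can each be machine-scored separately, then recombine
        # them so the three-dimensional structure is preserved. Category names and the
        # recombination rule are hypothetical.

        ANALYTIC_CATEGORIES = [
            "identifies_particles",        # disciplinary core idea
            "shows_cause_and_effect",      # crosscutting concept
            "model_explains_phenomenon",   # science and engineering practice
        ]

        def analytic_to_holistic(scores: dict) -> int:
            """Recombine per-category 0/1 scores into a holistic level.

            Here the holistic level is simply the number of categories satisfied,
            keeping each ML target simple while retaining the 3D structure.
            """
            return sum(scores[category] for category in ANALYTIC_CATEGORIES)

        example = {
            "identifies_particles": 1,
            "shows_cause_and_effect": 1,
            "model_explains_phenomenon": 0,
        }
        print(analytic_to_holistic(example))  # prints 2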
  3. Abstract

    Global science education reform calls for developing student knowledge-in-use that applies the integrated knowledge of core ideas and scientific practices to make sense of phenomena or solve problems. Knowledge-in-use development requires a long-term, standards-aligned, coherent learning system, including curriculum and instruction, assessment, and professional learning. This paper addresses the challenge of transforming standards into classroom practice for knowledge-in-use and presents an iterative design process for developing a coherent and standards-aligned learning system. Using a project-based learning approach, we present a theory-driven, empirically validated learning system aligned with the U.S. science standards, consisting of four consecutive units of curriculum and instruction materials, assessments, and professional learning to support students’ knowledge-in-use in high school chemistry. We also present the iterative development and testing process with empirical evidence to support the effectiveness of our learning system in a five-year NSF-funded research project. This paper discusses the theoretical perspectives of developing an NGSS-aligned, coherent, and effective learning system and recaps the development and testing process by unpacking all essential components in our learning system. We conclude that our theory-driven and empirically validated learning system can inform high school teachers and researchers across countries in transforming their local science standards into curriculum materials to support students’ knowledge-in-use development.

     
  4. Abstract

    One reason for the widespread use of the energy concept across the sciences is that energy analysis can be used to interpret the behavior of systems even if one does not know the particular mechanisms that underlie the observed behavior. By providing an approach to interpreting unfamiliar phenomena, energy provides a lens on phenomena that can set the stage for deeper learning about how and why phenomena occur. However, not all energy ideas are equally productive in setting the stage for new learning. In particular, researchers have debated the value of teaching students to interpret phenomena in terms of energy forms and transformations. In this study, we investigated how two different approaches to middle school energy instruction—one emphasizing energy transformations between forms and one emphasizing energy transfers between systems—prepared students to use their existing energy knowledge to engage in new learning about a novel energy‐related phenomenon. To do this, we designed a new assessment instrument to elicit student initial ideas about the phenomenon and to compare how effectively students from each approach learned from authentic learning resources. Our results indicate that students who learned to interpret phenomena in terms of energy transfers between systems learned more effectively from available learning resources than did students who learned to interpret phenomena in terms of energy forms and transformations. This study informs the design of introductory energy instruction and approaches for assessing how students' existing knowledge guides new learning about phenomena.

     
  5. Abstract

    Understanding the world around us is a growing necessity for the public as a whole, as citizens are required to make informed decisions in their everyday lives about complex issues. Systems thinking (ST) is a promising approach for developing solutions to various problems that society faces and has been acknowledged as a crosscutting concept that should be integrated across educational science disciplines. However, studies show that engaging students in ST is challenging, especially concerning aspects like change over time and feedback. Using computational system models and a system dynamics approach can support students in overcoming these challenges when making sense of complex phenomena. In this paper, we describe an empirical study that examines how 10th grade students engage in aspects of ST through computational system modeling as part of a Next Generation Science Standards-aligned project-based learning unit on chemical kinetics. We show students’ increased capacity to explain the underlying mechanism of the phenomenon in terms of change over time that goes beyond linear causal relationships. However, student models and their accompanying explanations were limited in scope as students did not address feedback mechanisms as part of their modeling and explanations. In addition, we describe specific challenges students encountered when evaluating and revising models. In particular, we show epistemological barriers to fruitful use of real-world data for model revision. Our findings provide insights into the opportunities of a system dynamics approach and the challenges that remain in supporting students to make sense of complex phenomena and nonlinear mechanisms.

     
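    As a rough sketch of the kind of computational system model students build in such a unit, the code below simulates a single stock (reactant concentration) whose outflow depends on the stock itself, producing nonlinear change over time. The rate constant, time step, and initial concentration are assumed values, not the unit's actual model.

        # Illustrative system-dynamics sketch (assumed values, not the unit's model):
        # first-order consumption of a reactant, simulated step by step.
        rate_constant = 0.3   # per second (assumed)
        dt = 0.1              # time step in seconds (assumed)
        concentration = 1.0   # initial reactant concentration in mol/L (assumed)

        for step in range(51):
            if step % 10 == 0:
                print(f"t = {step * dt:.1f} s, [A] = {concentration:.3f} mol/L")
            # The outflow from the "reactant" stock depends on the stock itself,
            # which is what makes the change over time nonlinear rather than linear.
            concentration -= rate_constant * concentration * dt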
  6. Involving students in scientific modeling practice is one of the most effective approaches to achieving the next generation science education learning goals. Given the complexity and multirepresentational features of scientific models, scoring student-developed models is time- and cost-intensive, remaining one of the most challenging assessment practices for science education. More importantly, teachers who rely on timely feedback to plan and adjust instruction are reluctant to use modeling tasks because they cannot provide timely feedback to learners. This study utilized machine learning (ML), the most advanced form of artificial intelligence (AI), to develop an approach to automatically score student-drawn models and their written descriptions of those models. We developed six modeling assessment tasks for middle school students that integrate disciplinary core ideas and crosscutting concepts with the modeling practice. For each task, we asked students to draw a model and write a description of that model, which gave students with diverse backgrounds an opportunity to represent their understanding in multiple ways. We then collected student responses to the six tasks and had human experts score a subset of those responses. We used the human-scored student responses to develop ML algorithmic models (AMs) and to train the computer. Validation using new data suggests that the machine-assigned scores achieved robust agreement with human consensus scores. Qualitative analysis of student-drawn models further revealed five characteristics that might impact machine scoring accuracy: alternative expressions, confusing labels, inconsistent size, inconsistent position, and redundant information. We argue that these five characteristics should be considered when developing machine-scorable modeling tasks.
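    A minimal sketch of the supervised workflow this abstract outlines for written descriptions: train a classifier on human-scored responses, then check machine-human agreement on held-out responses. The toy data, TF-IDF features, logistic regression model, and use of Cohen's kappa are illustrative assumptions, not the study's algorithmic models.

        # Illustrative sketch (not the study's algorithmic models): score written model
        # descriptions with a text classifier trained on human-scored examples, then
        # check machine-human agreement on held-out responses.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import cohen_kappa_score
        from sklearn.model_selection import train_test_split
        from sklearn.pipeline import make_pipeline

        # Hypothetical descriptions with human scores (1 = normative, 0 = non-normative),
        # repeated only so the toy train/test split has enough rows.
        texts = [
            "Gas particles move randomly and spread until they are evenly mixed.",
            "The smell moves because the air pushes it across the room.",
            "Particles of the perfume collide with air particles and spread out.",
            "The smell just travels to the other side of the room.",
        ] * 10
        labels = [1, 0, 1, 0] * 10

        X_train, X_test, y_train, y_test = train_test_split(
            texts, labels, test_size=0.25, random_state=0
        )
        scorer = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
        scorer.fit(X_train, y_train)

        # Cohen's kappa against human scores is a common agreement check for machine
        # scoring; what counts as "robust" agreement varies by study.
        print("kappa:", cohen_kappa_score(y_test, scorer.predict(X_test)))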
  7. Abstract

    We face complex global issues such as climate change that challenge our ability as humans to manage them. Models have been used as a pivotal science and engineering tool to investigate, represent, explain, and predict phenomena or solve problems that involve multi-faceted systems across many fields. To fully explain complex phenomena or solve problems using models requires both systems thinking (ST) and computational thinking (CT). This study proposes a theoretical framework that uses modeling as a way to integrate ST and CT. We developed a framework to guide the complex process of developing curriculum, learning tools, support strategies, and assessments for engaging learners in ST and CT in the context of modeling. The framework includes essential aspects of ST and CT based on selected literature, and illustrates how each modeling practice draws upon aspects of both ST and CT to support explaining phenomena and solving problems. We use computational models to show how these ST and CT aspects are manifested in modeling.

     
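    The short sketch below shows one way a computational model can surface both ST and CT aspects: a balancing feedback loop and change over time (ST) expressed as parameters plus a step-by-step algorithm that can be run and tested (CT). The logistic-growth model and its parameters are invented for illustration and are not taken from the framework itself.

        # Illustrative sketch (invented parameters): a logistic-growth system model.
        # Systems thinking made visible: a stock (population), an inflow, and a
        # balancing feedback loop that slows growth as the stock nears capacity.
        # Computational thinking made visible: the mechanism is decomposed into
        # parameters plus an algorithm that can be executed, tested, and revised.
        growth_rate = 0.4    # per time step (assumed)
        capacity = 1000.0    # carrying capacity (assumed)
        population = 10.0    # initial stock (assumed)

        for step in range(31):
            if step % 5 == 0:
                print(f"step {step:2d}: population = {population:7.1f}")
            # Feedback: the inflow depends on how full the system already is.
            population += growth_rate * population * (1 - population / capacity)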
  8. Abstract

    Developing and using models to make sense of phenomena or to design solutions to problems is a key science and engineering practice. Classroom use of technology-based tools can promote the development of students’ modelling practice, systems thinking, and causal reasoning by providing opportunities to develop and use models to explore phenomena. In previous work, we presented four aspects of system modelling that emerged during our development and initial testing of an online system modelling tool. In this study, we provide an in-depth examination and detailed evidence of 10th grade students engaging in those four aspects during a classroom enactment of a system modelling unit. We look at the choices students made when constructing their models, whether they described evidence and reasoning for those choices, and whether they described the behavior of their models in connection with model usefulness in explaining and making predictions about the phenomena of interest. We conclude with a set of recommendations for designing curricular materials that leverage digital tools to facilitate the iterative constructing, using, evaluating, and revising of models.

     