Title: Capturing modeling pathways using the Modeling Assessment for Physics Laboratory Experiments
A choose-your-own-adventure online assessment has been developed to measure the modeling process that students undertake when asked to measure the Earth's gravitational acceleration, g, using a simple pendulum. This activity forms part of the Modeling Assessment for Physics Laboratory Experiments (MAPLE), which is being developed to assess upper-division students' proficiency in modeling. The pendulum activity serves as a pre-test assessment using apparatus with which students are likely to be familiar. Using an initial sample of student data from a development phase of the assessment, we show that the pendulum activity can discriminate among a range of student processes that are relevant to understanding student engagement with modeling as a scientific tool.
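The pendulum activity rests on the small-angle relation T = 2π√(L/g); inverting it gives g = 4π²L/T². As a minimal sketch of the calculation students carry out (the length and period values below are illustrative, not student data from the assessment):

```python
import math

def estimate_g(length_m, periods_s):
    """Estimate g from a pendulum's length and repeated period measurements.

    Uses the small-angle model T = 2*pi*sqrt(L/g), so g = 4*pi^2*L / T^2.
    """
    t_mean = sum(periods_s) / len(periods_s)  # average the repeated trials
    return 4 * math.pi ** 2 * length_m / t_mean ** 2

# Example: a 1.00 m pendulum with slightly noisy period measurements.
g = estimate_g(1.00, [2.004, 2.010, 2.007])
print(round(g, 2))  # close to the accepted 9.81 m/s^2
```

Comparing the estimate against the accepted value, and deciding whether a discrepancy reflects the measurement or the small-angle model itself, is exactly the kind of model-refinement decision the assessment probes.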
Award ID(s):
1734006
PAR ID:
10233194
Journal Name:
2020 Physics Education Research Conference Proceedings
Page Range / eLocation ID:
155 to 160
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Research-based assessment instruments (RBAIs) are essential tools for measuring aspects of student learning and improving pedagogical practice. RBAIs are designed to measure constructs related to a well-defined learning goal. However, relatively few RBAIs exist that are suitable for the specific learning goals of upper-division physics lab courses. One such learning goal is modeling: the process of constructing, testing, and refining models of physical and measurement systems. Here, we describe the creation of one component of an RBAI to measure proficiency with modeling, the Modeling Assessment for Physics Laboratory Experiments (MAPLE). For use with large numbers of students, MAPLE must be scalable, which means it cannot require impractical amounts of labor to analyze its data, as is often the case with large free-response assessments. We therefore use the coupled multiple response (CMR) format, whose data can be analyzed by computer, to create items for measuring student reasoning in this component of MAPLE. We describe the process we used to create a set of CMR items for MAPLE, provide an example of this process for an item, and lay out an argument for the construct validity of the resulting items based on our process.
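A CMR item pairs a multiple-choice claim with a set of selectable reasoning elements, which is what makes responses machine-scorable. A minimal sketch of such automatic scoring (the key, the partial-credit rule, and the selections below are hypothetical illustrations, not MAPLE's actual items or rubric):

```python
# Hypothetical sketch of machine-scoring a coupled multiple response (CMR)
# item; the scoring rule and keys are illustrative, not taken from MAPLE.

def score_cmr(choice, reasons, key_choice, key_reasons):
    """Score one CMR response: full credit for the keyed choice, plus
    partial credit for keyed reasoning elements minus non-keyed picks."""
    choice_score = 1.0 if choice == key_choice else 0.0
    hits = len(reasons & key_reasons)          # keyed reasons selected
    false_alarms = len(reasons - key_reasons)  # non-keyed reasons selected
    reason_score = max(0.0, (hits - false_alarms) / len(key_reasons))
    return choice_score + reason_score  # total out of 2

# A student picks the keyed choice "B" and two of the three keyed reasons.
print(round(score_cmr("B", {"r1", "r3"}, "B", {"r1", "r2", "r3"}), 2))
```

Because scoring reduces to set comparisons like this, a computer can process thousands of responses at the cost of a lookup table, which is the scalability argument the abstract makes.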
  2. This Brief Report presents an example of assessment validation using an argument-based approach. The instrument we developed is a Brief Assessment of Students’ Mature Number Sense, which measures a central goal in mathematics education. We chose to develop this assessment to provide an efficient way to measure the effect of instructional practices designed to improve students’ number sense. Using an argument-based framework, we first identify our proposed interpretations and uses of student scores. We then outline our argument with three claims that provide evidence connecting students’ responses on the assessment with its intended uses. Finally, we highlight why using argument-based validation benefits measure developers as well as the broader mathematics education community. 
  3. Student procrastination and cramming for deadlines are major challenges in online learning environments, with negative educational and well-being side effects. Modeling student activities in continuous time and predicting their next study time are important problems that can help in creating personalized, timely interventions to mitigate these challenges. However, previous attempts at dynamic modeling of student procrastination suffer from major issues: they are unable to predict the next activity times, cannot deal with missing activity history, are not personalized, and disregard important course properties, such as assignment deadlines, that are essential in explaining the cramming behavior. To resolve these problems, we introduce a new personalized stimuli-sensitive Hawkes process model (SSHP), jointly modeling all student-assignment pairs and utilizing their similarities, to predict students' next activity times even when there are no historical observations. Unlike regular point processes that assume a constant external triggering effect from the environment, we model three dynamic types of external stimuli, according to assignment availabilities, assignment deadlines, and each student's time management habits. Our experiments on two synthetic datasets and two real-world datasets show superior performance in future activity prediction compared with state-of-the-art models. Moreover, we show that our model achieves a flexible and accurate parameterization of activity intensities in students.
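A Hawkes process models an event rate that is excited by the process's own past events; the stimuli-sensitive variant adds time-varying external triggers such as an approaching deadline. A simplified sketch of such an intensity function (the parameterization below is a generic exponential-kernel Hawkes intensity with a deadline boost, not the paper's exact SSHP formulation):

```python
import math

# Illustrative Hawkes-style intensity with a deadline-sensitive external
# stimulus. Parameter names (mu, alpha, beta, boost) are generic choices,
# not the SSHP paper's notation.

def intensity(t, history, mu=0.1, alpha=0.5, beta=1.0,
              deadline=None, boost=0.3, window=2.0):
    """Activity rate at time t: baseline + deadline stimulus + self-excitation."""
    rate = mu
    if deadline is not None and 0 <= deadline - t <= window:
        rate += boost  # cramming: external stimulus rises near the deadline
    # Self-excitation: each past activity raises the rate, decaying over time.
    rate += sum(alpha * math.exp(-beta * (t - ti)) for ti in history if ti < t)
    return rate
```

The key property this captures is that activity clusters: a study session raises the short-term probability of another one, and the rate spikes again as a deadline nears, which is how cramming enters the model.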
  4. Involving students in scientific modeling practice is one of the most effective approaches to achieving the next generation science education learning goals. Given the complexity and multirepresentational features of scientific models, scoring student-developed models is time- and cost-intensive, and it remains one of the most challenging assessment practices in science education. More importantly, teachers who rely on timely feedback to plan and adjust instruction are reluctant to use modeling tasks because they cannot provide timely feedback to learners. This study utilized machine learning (ML), a branch of artificial intelligence (AI), to develop an approach to automatically score student-drawn models and their written descriptions of those models. We developed six modeling assessment tasks for middle school students that integrate disciplinary core ideas and crosscutting concepts with the modeling practice. For each task, we asked students to draw a model and write a description of that model, which gave students with diverse backgrounds an opportunity to represent their understanding in multiple ways. We then collected student responses to the six tasks and had human experts score a subset of those responses. We used the human-scored student responses to develop ML algorithmic models (AMs) and to train the computer. Validation using new data suggests that the machine-assigned scores achieved robust agreement with human consensus scores. Qualitative analysis of student-drawn models further revealed five characteristics that might impact machine scoring accuracy: alternative expression, confusing labels, inconsistent size, inconsistent position, and redundant information. We argue that these five characteristics should be considered when developing machine-scorable modeling tasks.
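A standard way to quantify agreement between machine-assigned and human scores is a chance-corrected statistic such as Cohen's kappa. A self-contained sketch (the score vectors below are made up for illustration, not data from this study):

```python
from collections import Counter

# Sketch of checking machine-human score agreement with Cohen's kappa.
# The example scores are fabricated, purely to show the computation.

def cohens_kappa(human, machine):
    """Chance-corrected agreement between two raters over the same items."""
    n = len(human)
    observed = sum(h == m for h, m in zip(human, machine)) / n
    h_counts, m_counts = Counter(human), Counter(machine)
    # Expected agreement if the two raters scored independently at random
    # with their observed category frequencies.
    expected = sum(h_counts[c] * m_counts[c] for c in h_counts) / n ** 2
    return (observed - expected) / (1 - expected)

human_scores   = [1, 0, 1, 1, 0, 1, 0, 1]
machine_scores = [1, 0, 1, 0, 0, 1, 0, 1]
print(round(cohens_kappa(human_scores, machine_scores), 2))  # 0.75
```

Kappa near 1 indicates the machine reproduces human judgments well beyond chance; values computed on held-out responses, as in the validation described above, guard against the model merely memorizing its training set.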
  5. In our interconnected world, Systems Thinking (ST) is increasingly recognized as a key learning goal for science education, helping students make sense of complex phenomena. To support students in mastering ST, educators advocate for using computational modeling programs. However, studies suggest that students often struggle to apply ST in the context of computational modeling. While previous studies have suggested that students have difficulty modeling change over time through collector and flow structures and representing iterative processes through feedback loops, most of these studies investigated student ST through pre- and post-tests or through interviews. As such, there is a gap in the literature regarding how student ST approaches develop and change throughout a computational modeling unit. In this case study, we aimed to determine which aspects of ST students found challenging during a computational modeling unit, how their approaches to ST changed over time, and how the learning environment supported students with ST. Building on prior frameworks, we developed a seven-category analysis tool that enabled us to use a mixture of student discourse, writing, and screen actions to categorize seven ST behaviors in real time. Using this semi-quantitative tool and subsequent narrative analysis, we found evidence for all seven behavior categories, although not all categories were equally represented. Our results also suggest that opportunities for students to engage in discourse with both their peers and their teacher supported them with ST. Overall, this study demonstrates how student discourse and student writing can serve as important evidence of ST and as a potential factor in evaluating ST application as part of students' learning progression. The case study also provides evidence for the positive impact of implementing a social constructivist approach in the context of constructing computational system models.
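The collector and flow structures that students reportedly struggle with can be illustrated with a minimal stock-and-flow simulation containing one reinforcing feedback loop (all quantities below are illustrative, not from the study's modeling unit):

```python
# Minimal stock-and-flow sketch of the kind built in computational system
# modeling tools: a "collector" (stock) changed each step by a flow whose
# size depends on the stock itself, forming a reinforcing feedback loop.
# Initial value and rate are illustrative.

def simulate(stock=100.0, growth_rate=0.05, steps=10, dt=1.0):
    """Iterate the stock over time; returns the full trajectory."""
    history = [stock]
    for _ in range(steps):
        flow = growth_rate * stock  # feedback: flow proportional to stock
        stock += flow * dt          # flow accumulates into the collector
        history.append(stock)
    return history

trajectory = simulate()
print(round(trajectory[-1], 1))  # exponential growth: about 162.9
```

The iterative update, where each step's flow depends on the previous step's stock, is precisely the change-over-time reasoning that the collector-and-flow challenges described above concern.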