Title: Creating a coupled multiple response assessment for modeling in lab courses
Research-based assessment instruments (RBAIs) are essential tools to measure aspects of student learning and improve pedagogical practice. RBAIs are designed to measure constructs related to a well-defined learning goal. However, relatively few RBAIs exist that are suitable for the specific learning goals of upper-division physics lab courses. One such learning goal is modeling, the process of constructing, testing, and refining models of physical and measurement systems. Here, we describe the creation of one component of an RBAI to measure proficiency with modeling. The RBAI is called the Modeling Assessment for Physics Laboratory Experiments (MAPLE). For use with large numbers of students, MAPLE must be scalable, which includes not requiring impractical amounts of labor to analyze its data, as is often the case with large free-response assessments. We therefore use the coupled multiple response (CMR) format, whose data can be analyzed by a computer, to create items for measuring student reasoning in this component of MAPLE. We describe the process we used to create a set of CMR items for MAPLE, provide an example of this process for an item, and lay out an argument for construct validity of the resulting items based on our process.
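To make the machine-scorability of the CMR format concrete, the sketch below scores one hypothetical CMR item: the multiple-choice answer earns part of the credit, and the selected reasoning options earn the rest. The option keys and partial-credit weights are illustrative assumptions, not MAPLE's actual items or rubric.

```python
# Minimal sketch of machine-scoring a coupled multiple response (CMR) item.
# The option keys and partial-credit weights are hypothetical illustrations,
# not MAPLE's actual items or scoring rubric.

def score_cmr_item(answer: str, reasoning: set[str],
                   correct_answer: str,
                   reasoning_weights: dict[str, float]) -> float:
    """Score one CMR item: half credit for the multiple-choice answer,
    half for the selected reasoning options (negative weights penalize
    distractor reasoning)."""
    answer_score = 0.5 if answer == correct_answer else 0.0
    reasoning_score = sum(reasoning_weights.get(opt, 0.0) for opt in reasoning)
    # Clamp the reasoning contribution to its half of the credit.
    reasoning_score = max(0.0, min(0.5, reasoning_score))
    return answer_score + reasoning_score

# Example: a student picks answer "B" plus one productive and one
# distractor reasoning option.
weights = {"R1": 0.25, "R2": 0.25, "R3": -0.25}  # R3 is a distractor
print(score_cmr_item("B", {"R1", "R3"}, "B", weights))  # -> 0.5
```

Because the whole response is captured as structured choices, a function like this can batch-score thousands of students, which is the scalability property the abstract emphasizes.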
Fox, Michael F.; Pollard, Benjamin; Ríos, Laura; Lewandowski, H. J.
(2020 Physics Education Research Conference Proceedings)
A choose-your-own-adventure online assessment has been developed to measure the process of modeling undertaken by students when asked to measure the Earth's gravitational acceleration, g, using a simple pendulum. This activity forms part of the Modeling Assessment for Physics Laboratory Experiments (MAPLE), which is being developed to assess upper-division students' proficiency in modeling. The pendulum activity serves as a pre-test assessment with apparatus with which students are likely to be familiar. Using an initial sample of student data from a development phase of the assessment, we show that the pendulum activity is able to discriminate between a range of student processes that are relevant to understanding student engagement with modeling as a scientific tool.
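For context, the activity rests on the small-angle pendulum relation T = 2π√(L/g), which rearranges to g = 4π²L/T². A minimal sketch of that calculation, using hypothetical length and timing values rather than data from the study:

```python
import math

# Small-angle pendulum model: T = 2*pi*sqrt(L/g)  =>  g = 4*pi^2 * L / T^2.
# The length and timing values below are hypothetical, not from the study.

L = 0.994           # pendulum length in meters
n_swings = 20       # timing many swings reduces the effect of reaction time
total_time = 40.05  # stopwatch reading in seconds

T = total_time / n_swings      # period of one swing
g = 4 * math.pi**2 * L / T**2  # inferred gravitational acceleration
print(f"g = {g:.2f} m/s^2")    # ~9.79 m/s^2 for these numbers
```

The modeling interest lies less in this arithmetic than in the surrounding decisions the assessment probes, such as how students handle timing uncertainty and the limits of the small-angle model.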
Wilcox, Bethany R.; Rainey, Katherine D.; Vignal, Michael
(Physics Education Research Conference Proceedings)
Recent years have seen a movement within the research-based assessment development community towards item formats that go beyond simple multiple-choice formats. Some have moved towards free-response questions, particularly at the upper-division level; however, free-response items have the constraint that they must be scored by hand. To avoid this limitation, some assessment developers have moved toward formats that maintain a closed-response structure while still providing more nuanced insight into student reasoning. One such format is known as coupled multiple response (CMR). This format pairs multiple-choice and multiple-response formats, allowing students both to commit to an answer and to select options that correspond to their reasoning. In addition to being machine-scorable, this format allows for more nuanced scoring than simple right or wrong. However, such nuanced scoring presents a potential challenge with respect to utilizing certain testing theories to construct validity arguments for the assessment. In particular, Item Response Theory (IRT) models often assume dichotomously scored items. While polytomous IRT models do exist, each brings with it certain constraints and limitations. Here, we explore multiple IRT models and scoring schemes using data from an existing CMR test, with the goal of providing guidance and insight into possible methods for simultaneously leveraging the affordances of both the CMR format and IRT models in the context of constructing validity arguments for research-based assessments.
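To make the modeling choice concrete, the sketch below contrasts a dichotomous two-parameter logistic (2PL) item response function with Samejima's graded response model, one common polytomous alternative. All parameter values are illustrative, not fitted to any real CMR data.

```python
import math

# Dichotomous 2PL model: probability of a correct response given ability
# theta, item discrimination a, and item difficulty b.
def p_2pl(theta: float, a: float, b: float) -> float:
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Polytomous graded response model (Samejima): P(score >= k) is a 2PL curve
# with its own threshold b_k; category probabilities are differences of
# adjacent cumulative curves.
def p_grm(theta: float, a: float, thresholds: list[float]) -> list[float]:
    cumulative = [1.0] + [p_2pl(theta, a, b_k) for b_k in thresholds] + [0.0]
    return [cumulative[k] - cumulative[k + 1] for k in range(len(cumulative) - 1)]

# Illustrative parameters only, not fitted to real data.
print(p_2pl(0.5, a=1.2, b=0.0))                         # P(correct)
print(p_grm(0.5, a=1.2, thresholds=[-1.0, 0.5, 1.5]))   # P of each score level
```

The tension the abstract describes is visible here: dichotomizing a CMR item discards the partial-credit information, while the graded model keeps it at the cost of extra threshold parameters per item.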
Pollard, Benjamin; Hobbs, Robert; Dounas-Frazer, Dimitri R.; Lewandowski, H. J.
(PERC Proceedings)
Student understanding of measurement uncertainty is an important learning outcome in physics lab courses across the US, including at the University of Colorado Boulder (CU), where it is among the major learning outcomes for the large introductory stand-alone physics lab course. One research tool for studying student understanding of measurement uncertainty, which we use in this course, is the Physics Measurement Questionnaire (PMQ), an open-response assessment for measuring student understanding of measurement uncertainty. Interpreting and analyzing PMQ data involves coding students' written explanations to open-response questions. However, the preexisting scoring scheme for the PMQ does not fully capture the breadth and depth of reasoning contained in our students' responses. Therefore, we created a new coding scheme for the PMQ based on responses from our students. Here, we document our process to develop a new coding scheme for the PMQ and describe the resulting codes. We also present examples of what can be learned from applying the new coding scheme at our institution. (Physics Education Research Conference 2019, part of the PER Conference series, Provo, UT: July 24-25, 2019.)
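As a rough illustration of the kind of analysis such a scheme enables, the sketch below tallies how often each code appears across a set of coded responses. The code labels and data are hypothetical, not the actual PMQ coding scheme.

```python
from collections import Counter

# Sketch of tabulating code frequencies after applying a qualitative coding
# scheme to open-response answers. The code labels and the coded responses
# here are hypothetical, not the actual PMQ scheme or data.

coded_responses = [
    {"spread", "repeat_trials"},
    {"spread"},
    {"single_value"},
    {"spread", "instrument_limit"},
]

counts = Counter(code for response in coded_responses for code in response)
for code, n in counts.most_common():
    print(f"{code}: {n}/{len(coded_responses)} responses")
```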
Despite the importance of developing elementary science teachers' content knowledge for teaching (CKT), there are few assessments designed to measure the full breadth of their CKT at scale. Our overall research project addressed this gap by developing an online assessment to measure elementary preservice teachers' CKT about matter and its interactions. This study, part of our larger project, reports findings from one component of the item development process examining the construct validity of 118 different CKT about matter assessment items. In this study, 86 elementary teachers participated in cognitive interviews to examine: (a) the knowledge and reasoning they used when responding to these CKT about matter assessment items and (b) the nature of the content challenges and content teaching challenges they encountered. Findings showed that over 80% of participant interview responses indicated that the CKT about matter items functioned as hypothesized, providing evidence to support future use of these items on a large-scale assessment and in studies of science teachers' CKT. When responding to the items, participants showed evidence of four main challenges with the science content: (a) using scientific concepts to reason about science tasks, (b) using adequate evidence to reason about scientific phenomena, (c) drawing upon examples of scientific phenomena, and (d) drawing upon science vocabulary. Findings also showed that participants experienced challenges with the following aspects of content teaching when responding to these items: (a) connecting to key scientific concepts involved in the work of teaching science, (b) attending to instructional goal(s), and (c) recognizing features of grade-level appropriateness. Implications for using CKT items as part of large-scale science assessment systems and for identifying areas to target in elementary science teachers' CKT development are addressed.
Wiebe, Eric; London, Jennifer; Aksit, Osman; Mott, Bradford W.; Boyer, Kristy Elizabeth; Lester, James C.
(, Proceedings of the 50th ACM Technical Symposium on Computer Science Education)
The recognition of middle grades as a critical juncture in CS education has led to the widespread development of CS curricula and integration efforts. The goal of many of these interventions is to develop a set of underlying abilities that has been termed computational thinking (CT). This goal presents a key challenge for assessing student learning: we must identify assessment items associated with an emergent understanding of the key cognitive abilities underlying CT while avoiding reliance on specialized knowledge of specific programming languages. In this work we explore the psychometric properties of assessment items appropriate for use with middle grades (US grades 6-8; ages 11-13) students. We also investigate whether these items measure a single ability dimension. Finally, we strive to recommend a "lean" set of items that can be completed in a single 50-minute class period and have high face validity. The paper makes the following contributions: 1) it adds to the literature related to the emerging construct of CT and its relationship to the existing CTt and Bebras instruments, and 2) it offers a research-based CT assessment instrument for use by both researchers and educators in the field.
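One standard way to probe whether a set of items measures a single ability dimension is to inspect the eigenvalues of the inter-item correlation matrix: a first eigenvalue that dominates the rest is consistent with unidimensionality. The sketch below runs this check on simulated, not actual, response data.

```python
import numpy as np

# Rough unidimensionality check on a 0/1 item-response matrix: eigenvalues
# of the inter-item correlation matrix. The responses are simulated from a
# single latent trait, not taken from the study's data.

rng = np.random.default_rng(0)
ability = rng.normal(size=200)        # one latent trait, 200 students
difficulty = rng.normal(size=12)      # 12 items of varying difficulty
prob = 1 / (1 + np.exp(-(ability[:, None] - difficulty[None, :])))
responses = (rng.random((200, 12)) < prob).astype(float)  # 0/1 scores

eigvals = np.linalg.eigvalsh(np.corrcoef(responses, rowvar=False))[::-1]
print(eigvals[:3])  # the first eigenvalue should dominate for a 1-D construct
```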
Pollard, Benjamin, Fox, Michael F., Ríos, Laura, and Lewandowski, H. J. Creating a coupled multiple response assessment for modeling in lab courses. Retrieved from https://par.nsf.gov/biblio/10233193. 2020 Physics Education Research Conference Proceedings. Web. doi:10.1119/perc.2020.pr.pollard.
Pollard, Benjamin, Fox, Michael F., Ríos, Laura, & Lewandowski, H. J. Creating a coupled multiple response assessment for modeling in lab courses. 2020 Physics Education Research Conference Proceedings. Retrieved from https://par.nsf.gov/biblio/10233193. https://doi.org/10.1119/perc.2020.pr.pollard
Pollard, Benjamin, Fox, Michael F., Ríos, Laura, and Lewandowski, H. J.
"Creating a coupled multiple response assessment for modeling in lab courses". 2020 Physics Education Research Conference Proceedings (). Country unknown/Code not available. https://doi.org/10.1119/perc.2020.pr.pollard.https://par.nsf.gov/biblio/10233193.
@article{osti_10233193,
title = {Creating a coupled multiple response assessment for modeling in lab courses},
url = {https://par.nsf.gov/biblio/10233193},
DOI = {10.1119/perc.2020.pr.pollard},
abstractNote = {Research-based assessment instruments (RBAIs) are essential tools to measure aspects of student learning and improve pedagogical practice. RBAIs are designed to measure constructs related to a well-defined learning goal. However, relatively few RBAIs exist that are suitable for the specific learning goals of upper-division physics lab courses. One such learning goal is modeling, the process of constructing, testing, and refining models of physical and measurement systems. Here, we describe the creation of one component of an RBAI to measure proficiency with modeling. The RBAI is called the Modeling Assessment for Physics Laboratory Experiments (MAPLE). For use with large numbers of students, MAPLE must be scalable, which includes not requiring impractical amounts of labor to analyze its data, as is often the case with large free-response assessments. We therefore use the coupled multiple response (CMR) format, whose data can be analyzed by a computer, to create items for measuring student reasoning in this component of MAPLE. We describe the process we used to create a set of CMR items for MAPLE, provide an example of this process for an item, and lay out an argument for construct validity of the resulting items based on our process.},
journal = {2020 Physics Education Research Conference Proceedings},
author = {Pollard, Benjamin and Fox, Michael F. and Ríos, Laura and Lewandowski, H. J.}
}