

Search for: All records

Award ID contains: 1720646


  1. Abstract

    Determining the most appropriate method of scoring an assessment depends on multiple factors, including the intended use of results, the assessment's purpose, and time constraints. Both the dichotomous and partial credit models have their advantages, yet direct comparisons of assessment outcomes from each method are not typical with constructed response items. The present study compared the impact of both scoring methods on the internal structure and consequential validity of a middle‐grades problem‐solving assessment called the Problem Solving Measure for Grade Six (PSM6). After the assessment was scored both ways, Rasch dichotomous and partial credit analyses indicated similarly strong psychometric findings across models. Student outcome measures on the PSM6, scored both dichotomously and with partial credit, demonstrated a strong, positive, significant correlation. Similar demographic patterns were noted regardless of scoring method. Both scoring methods produced similar results, suggesting that either would be appropriate to use with the PSM6.

     
  2. Abstract

    Problem solving is a central focus of mathematics teaching and learning. If teachers are expected to support students' problem‐solving development, it stands to reason that teachers should also be able to solve problems aligned to grade-level content standards. The purpose of this validation study is twofold: (1) to present evidence supporting the use of the Problem Solving Measures Grades 3–5 with preservice teachers (PSTs), and (2) to examine PSTs' abilities to solve problems aligned to grades 3–5 academic content standards. This study used Rasch measurement techniques to support psychometric analysis of the Problem Solving Measures when used with PSTs. Results indicate the Problem Solving Measures are appropriate for use with PSTs, and that PSTs' performance on the Problem Solving Measures differed between first‐year PSTs and end‐of‐program PSTs. Implications include program evaluation and the potential benefits of using K‐12 student‐level assessments as measures of PSTs' content knowledge.

     
  3. Free, publicly-accessible full text available November 1, 2024
  6. Lischka, A. ; Dyer, E. ; Jones, R. ; Lovett, J. ; Strayer, J. ; Drown, S. (Ed.)
    Using a test for a purpose it was not intended for can produce misleading results and interpretations, potentially leading to negative consequences from testing (AERA et al., 2014). For example, a mathematics test designed for use with grade 7 students is likely inappropriate for use with grade 3 students. There may be cases when a test can be used with a population related to the intended one; however, the validity evidence and claims must be examined. We explored the use of student measures with preservice teachers (PSTs) in a teacher-education context. The present study aims to spark a discussion about using some student measures with teachers. The Problem-solving Measures (PSMs) were developed for use with grades 3-8 students. They measure students' problem-solving performance within the context of the Common Core State Standards for Mathematics (CCSSI, 2010; see Bostic & Sondergeld, 2015; Bostic et al., 2017; Bostic et al., 2021). After constructing the PSMs, the developers wondered: if students are expected to engage successfully with the PSMs, might future grades 3-8 teachers also demonstrate proficiency?
  7. The COVID-19 pandemic disrupted many school accountability systems that rely on student-level achievement data. Many states faced uncertainty about how to meet federal accountability requirements without typical school data. Prior research provides evidence that student achievement is correlated with students' social background, which raises concerns about the predictive bias of accountability systems. This mixed-methods study (a) examines how well non-achievement variables (i.e., students' social background) predict school districts' report card letter grades in Ohio, and (b) explores educators' perceptions of report card grades. Results suggest that social background and community demographic variables have a significant impact on measures of school accountability.
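The kind of analysis this abstract describes, checking whether a non-achievement variable predicts a district's report card letter grade, can be sketched with a simple least-squares fit. The data below are entirely hypothetical (an invented district poverty rate and letter grades coded A=4 through F=0); the study itself used Ohio report card data and a richer set of social background variables.

```python
# Hypothetical sketch: does a non-achievement variable (a made-up
# district poverty rate) predict report card letter grades (A=4 ... F=0)?
# Simple least-squares slope, intercept, and R^2 computed by hand.
poverty_rate = [0.05, 0.12, 0.20, 0.35, 0.50, 0.65]  # hypothetical districts
grade_points = [4, 4, 3, 2, 1, 1]                    # hypothetical A..F codes

n = len(poverty_rate)
mean_x = sum(poverty_rate) / n
mean_y = sum(grade_points) / n

# Sums of squares and cross-products for the least-squares fit.
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(poverty_rate, grade_points))
sxx = sum((x - mean_x) ** 2 for x in poverty_rate)
syy = sum((y - mean_y) ** 2 for y in grade_points)

slope = sxy / sxx                   # expected to be negative in this sketch
intercept = mean_y - slope * mean_x
r_squared = sxy ** 2 / (sxx * syy)  # share of grade variance "explained"

print(f"slope={slope:.2f}, intercept={intercept:.2f}, R^2={r_squared:.2f}")
```

A high R² from a non-achievement predictor is exactly the pattern that raises the predictive-bias concern the abstract describes: the letter grade would then reflect who attends a district as much as what the district does.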
  10. This Research Commentary addresses the need for an instrument abstract—termed an Interpretation and Use Statement (IUS)—to be included when mathematics educators present instruments for use by others in journal articles and other communication venues (e.g., websites and administration manuals). We begin with presenting the need for IUSs, including the importance of a focus on interpretation and use. We then propose a set of elements—identified by a group of mathematics education researchers, instrument developers, and psychometricians—to be included in the IUS. We describe the development process, the recommended elements for inclusion, and two example IUSs. Last, we present why IUSs have the potential to benefit end users and the field of mathematics education. 