

Search for: All records

Award ID contains: 2100988


  1. Free, publicly-accessible full text available November 1, 2024
  2. In the United States, national and state standardized assessments have become a metric for measuring student learning and high-quality learning environments. As the COVID-19 pandemic forced a multitude of learning modalities (e.g., hybrid, socially distanced face-to-face instruction, virtual environments), it becomes critical to examine how this learning disruption influenced elementary mathematics performance. This study tested for differences in mathematics performance on fourth-grade standardized tests before and during COVID-19 in a case study of a rural Ohio school district using the Measure of Academic Progress (MAP) mathematics test. A two-way ANOVA showed that fourth-grade MAP mathematics scores were statistically similar for the 2019 pre-COVID cohort (n = 31) and the 2020 COVID-19 cohort (n = 82), and by gender group, between Fall 2019 and Fall 2020. Implications for rural students' academic performance in virtual learning environments are discussed.
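The cohort-by-gender comparison above rests on a two-way ANOVA. As a rough illustration only (the study's actual cohorts were unbalanced, n = 31 vs. n = 82; the scores, cell size, and seed below are synthetic), a balanced two-factor ANOVA can be computed directly from sums of squares:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 30                               # hypothetical per-cell size (illustrative only)
cohorts, genders = ["2019", "2020"], ["F", "M"]
# Synthetic MAP-style scores for each cohort x gender cell
data = {(a, g): rng.normal(210.0, 12.0, n) for a in cohorts for g in genders}

grand = np.mean(np.concatenate(list(data.values())))
m_cell = {k: v.mean() for k, v in data.items()}
m_coh = {a: np.mean([m_cell[(a, g)] for g in genders]) for a in cohorts}
m_gen = {g: np.mean([m_cell[(a, g)] for a in cohorts]) for g in genders}

# Sums of squares for a balanced two-factor design
ss_a = n * len(genders) * sum((m - grand) ** 2 for m in m_coh.values())
ss_b = n * len(cohorts) * sum((m - grand) ** 2 for m in m_gen.values())
ss_cells = n * sum((m - grand) ** 2 for m in m_cell.values())
ss_ab = ss_cells - ss_a - ss_b                      # interaction
ss_err = sum(((v - m_cell[k]) ** 2).sum() for k, v in data.items())
df_err = n * len(cohorts) * len(genders) - len(cohorts) * len(genders)

for name, ss in [("cohort", ss_a), ("gender", ss_b), ("cohort x gender", ss_ab)]:
    f_stat = ss / (ss_err / df_err)                 # each effect has 1 df here
    print(f"{name}: F = {f_stat:.2f}, p = {stats.f.sf(f_stat, 1, df_err):.3f}")
```

A non-significant cohort effect and interaction, as the study reports, would show up here as small F statistics and p-values above the chosen alpha.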
  3. The Delphi method has been adapted to inform item refinements in educational and psychological assessment development. An explanatory sequential mixed methods design using Delphi is a common approach to gain experts' insight into why items might have exhibited differential item functioning (DIF) for a sub-group, indicating potential item bias. Use of Delphi before quantitative field testing to screen for potential sources leading to item bias is lacking in the literature. An exploratory sequential design is illustrated as an additional approach using a Delphi technique in Phase I and Rasch DIF analyses in Phase II. We introduce the 2 × 2 Concordance Integration Typology as a systematic way to examine agreement and disagreement across the qualitative and quantitative findings using a concordance joint display table. A worked example from the development of the Problem-Solving Measures Grades 6–8 Computer Adaptive Tests supported using an exploratory sequential design to inform item refinement. The 2 × 2 Concordance Integration Typology (a) crystallized instances where additional refinements were potentially needed and (b) provided a way to evaluate the distribution of bias across the set of items as a whole. Implications are discussed for advancing data integration techniques and using mixed methods to improve instrument development.
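The study above uses Rasch-based DIF analyses, which typically require dedicated psychometric software. A simpler, closely related screen is the Mantel-Haenszel procedure, which stratifies examinees by rest score and pools 2×2 tables; the sketch below (entirely simulated data, with DIF deliberately injected into one item) illustrates that idea, not the paper's exact method:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_items = 2000, 10
ability = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)                  # 0 = reference, 1 = focal
difficulty = np.linspace(-1.0, 1.0, n_items)

logits = ability[:, None] - difficulty[None, :]
logits[:, 0] -= 0.8 * group                    # inject DIF: item 0 is harder for the focal group
resp = (rng.random((n, n_items)) < 1 / (1 + np.exp(-logits))).astype(int)

def mh_odds_ratio(item):
    """Mantel-Haenszel common odds ratio for one item, stratified by rest score."""
    rest = resp.sum(axis=1) - resp[:, item]    # matching score without the studied item
    num = den = 0.0
    for s in np.unique(rest):
        in_stratum = rest == s
        ref, foc = in_stratum & (group == 0), in_stratum & (group == 1)
        a, c = resp[ref, item].sum(), resp[foc, item].sum()   # correct counts
        b, d = ref.sum() - a, foc.sum() - c                   # incorrect counts
        nt = a + b + c + d
        if nt:
            num += a * d / nt
            den += b * c / nt
    return num / den if den else float("nan")

for i in range(n_items):
    print(f"item {i}: MH odds ratio = {mh_odds_ratio(i):.2f}")
```

An odds ratio well above 1 flags an item as favoring the reference group; here only the item with injected DIF should stand out, mirroring how a quantitative Phase II flags candidates for the qualitative concordance review.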
  4. Determining the most appropriate method of scoring an assessment is based on multiple factors, including the intended use of results, the assessment's purpose, and time constraints. Both the dichotomous and partial credit models have their advantages, yet direct comparisons of assessment outcomes from each method are not typical with constructed response items. The present study compared the impact of both scoring methods on the internal structure and consequential validity of a middle-grades problem-solving assessment, the Problem Solving Measure for Grade Six (PSM6). After being scored both ways, Rasch dichotomous and partial credit analyses indicated similarly strong psychometric findings across models. Student outcome measures on the PSM6, scored both dichotomously and with partial credit, demonstrated a strong, positive, significant correlation. Similar demographic patterns were noted regardless of scoring method. Both scoring methods produced similar results, suggesting that either would be appropriate to use with the PSM6.
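The core comparison in that study, correlating total scores under dichotomous versus partial credit scoring, can be sketched in a few lines. The data below are simulated 0-2 constructed-response scores, not the PSM6 itself, and the thresholds are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
n_persons, n_items = 200, 15
ability = rng.normal(0.0, 1.0, n_persons)

# Simulated 0-2 constructed-response scores (illustrative data only)
latent = ability[:, None] + rng.normal(0.0, 1.0, (n_persons, n_items))
poly = (latent > -0.5).astype(int) + (latent > 0.5).astype(int)   # partial credit: 0, 1, 2
dich = (poly == 2).astype(int)                                    # dichotomous: full credit or nothing

pc_total = poly.sum(axis=1)
di_total = dich.sum(axis=1)
r = np.corrcoef(pc_total, di_total)[0, 1]
print(f"Pearson r between total scores under the two scoring rules: {r:.2f}")
```

A high correlation here, as the study found empirically for the PSM6, is what licenses the conclusion that either scoring rule ranks examinees similarly.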
  5. Problem solving is a central focus of mathematics teaching and learning. If teachers are expected to support students' problem-solving development, then it stands to reason that teachers should also be able to solve problems aligned to grade-level content standards. The purpose of this validation study is twofold: (1) to present evidence supporting the use of the Problem Solving Measures Grades 3–5 with preservice teachers (PSTs), and (2) to examine PSTs' abilities to solve problems aligned to grades 3–5 academic content standards. This study used Rasch measurement techniques to support psychometric analysis of the Problem Solving Measures when used with PSTs. Results indicate the Problem Solving Measures are appropriate for use with PSTs, and PSTs' performance on the Problem Solving Measures differed between first-year PSTs and end-of-program PSTs. Implications include program evaluation and the potential benefits of using K-12 student-level assessments as measures of PSTs' content knowledge.
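Several of the abstracts above rely on Rasch measurement; production analyses use dedicated software (e.g., Winsteps), but the dichotomous Rasch model itself is compact enough to sketch. The following joint maximum-likelihood fit on simulated responses (all values illustrative) recovers item difficulties by alternating Newton updates for persons and items:

```python
import numpy as np

rng = np.random.default_rng(3)
n_persons, n_items = 300, 12
theta_true = rng.normal(0.0, 1.0, n_persons)          # person abilities
b_true = np.linspace(-1.5, 1.5, n_items)              # item difficulties
prob = 1 / (1 + np.exp(-(theta_true[:, None] - b_true[None, :])))
X = (rng.random((n_persons, n_items)) < prob).astype(float)

# Drop zero and perfect response patterns (their ML ability estimates are infinite)
keep = (X.sum(axis=1) > 0) & (X.sum(axis=1) < n_items)
X = X[keep]

# Joint maximum likelihood: alternate Newton steps for persons and items
theta = np.zeros(X.shape[0])
b = np.zeros(n_items)
for _ in range(50):
    pr = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    theta = np.clip(theta + (X - pr).sum(axis=1) / (pr * (1 - pr)).sum(axis=1), -6, 6)
    pr = 1 / (1 + np.exp(-(theta[:, None] - b[None, :])))
    b = np.clip(b - (X - pr).sum(axis=0) / (pr * (1 - pr)).sum(axis=0), -6, 6)
    b -= b.mean()                                      # fix the scale: difficulties sum to zero

print("estimated item difficulties:", b.round(2))
```

The recovered difficulties should track the generating values closely; in practice this item calibration step is what underwrites claims that a measure built for students also functions for PSTs.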
  6. Lischka, A.; Dyer, E.; Jones, R.; Lovett, J.; Strayer, J.; Drown, S. (Eds.)
    Using a test for a purpose it was not intended for can promote misleading results and interpretations, potentially leading to negative consequences from testing (AERA et al., 2014). For example, a mathematics test designed for use with grade 7 students is likely inappropriate for use with grade 3 students. There may be cases when a test can be used with a population related to the intended one; however, validity evidence and claims must be examined. We explored the use of student measures with preservice teachers (PSTs) in a teacher-education context. The present study intends to spark a discussion about using some student measures with teachers. The Problem-solving Measures (PSMs) were developed for use with grades 3-8 students. They measure students’ problem-solving performance within the context of the Common Core State Standards for Mathematics (CCSSI, 2010; see Bostic & Sondergeld, 2015; Bostic et al., 2017; Bostic et al., 2021). After their construction, the developers wondered: If students were expected to engage successfully on the PSMs, then might future grades 3-8 teachers also demonstrate proficiency? 
  7. The COVID-19 pandemic disrupted many school accountability systems that rely on student-level achievement data. Many states encountered uncertainty about how to meet federal accountability requirements without typical school data. Prior research provides evidence that student achievement is correlated to students’ social background, which raises concerns about the predictive bias of accountability systems. This mixed-methods study (a) examines the predictive ability of non-achievement-based variables (i.e., students’ social background) on school districts’ report card letter grade in Ohio, and (b) explores educators’ perceptions of report card grades. Results suggest that social background and community demographic variables have a significant impact on measures of school accountability. 
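The quantitative strand of the accountability study above asks how well non-achievement variables predict district report card grades. A minimal sketch of that kind of regression, using entirely synthetic districts and hypothetical predictor names (not the study's Ohio data), is:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 120                                         # hypothetical number of districts
pct_econ_disadv = rng.uniform(0.0, 100.0, n)    # % economically disadvantaged students
median_income = rng.normal(55.0, 15.0, n)       # median household income, $1000s

# Simulated letter grades (1=F ... 5=A) driven partly by social background (illustrative only)
grade = np.clip(np.round(4.5 - 0.03 * pct_econ_disadv + 0.01 * median_income
                         + rng.normal(0.0, 0.6, n)), 1, 5)

# Ordinary least squares via the normal equations (lstsq)
A = np.column_stack([np.ones(n), pct_econ_disadv, median_income])
coef, *_ = np.linalg.lstsq(A, grade, rcond=None)
pred = A @ coef
r2 = 1 - ((grade - pred) ** 2).sum() / ((grade - grade.mean()) ** 2).sum()
print(f"coefficients (intercept, disadv, income): {coef.round(3)}, R^2 = {r2:.2f}")
```

A sizable R² from background variables alone is the pattern that raises the predictive-bias concern the abstract describes: the grade partly reflects who attends the district rather than what the district does.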
  8. This Research Commentary addresses the need for an instrument abstract—termed an Interpretation and Use Statement (IUS)—to be included when mathematics educators present instruments for use by others in journal articles and other communication venues (e.g., websites and administration manuals). We begin with presenting the need for IUSs, including the importance of a focus on interpretation and use. We then propose a set of elements—identified by a group of mathematics education researchers, instrument developers, and psychometricians—to be included in the IUS. We describe the development process, the recommended elements for inclusion, and two example IUSs. Last, we present why IUSs have the potential to benefit end users and the field of mathematics education.
  9. Mathematics assessments should allow all students opportunities to demonstrate their knowledge and skills as problem solvers. Looking at textbook word problems, we share a process for revising them using Universal Design for Learning.