

Search for: All records

Creators/Authors contains: "Bright, D."


  1. In the United States, national and state standardized assessments have become a metric for measuring student learning and high-quality learning environments. As the COVID-19 pandemic offered a multitude of learning modalities (e.g., hybrid, socially distanced face-to-face instruction, virtual environment), it became critical to examine how this learning disruption influenced elementary mathematics performance. This study tested for differences in mathematics performance on fourth-grade standardized tests before and during COVID-19 in a case study of a rural Ohio school district using the Measure of Academic Progress (MAP) mathematics test. A two-way ANOVA showed that fourth-grade MAP mathematics scores were statistically similar for the 2019 pre-COVID cohort (n = 31) and the 2020 COVID-19 cohort (n = 82), and by gender group, between Fall 2019 and Fall 2020. Implications for rural students’ academic performance in virtual learning environments are discussed.
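A two-way ANOVA of the kind described above (cohort × gender) can be sketched as follows. This is a minimal illustration on synthetic, balanced data; the score scale, cell size, and seed are assumptions for demonstration, not the study's data (the actual cohorts were unbalanced, n = 31 and n = 82, which requires care over sum-of-squares types).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical balanced design: 2 cohorts x 2 gender groups, 20 scores per cell.
# Factor A = cohort (pre-COVID vs COVID), factor B = gender.
a, b, n = 2, 2, 20
scores = rng.normal(200, 10, size=(a, b, n))  # MAP-like scale (assumed)

grand = scores.mean()
mean_a = scores.mean(axis=(1, 2))   # cohort marginal means
mean_b = scores.mean(axis=(0, 2))   # gender marginal means
mean_ab = scores.mean(axis=2)       # cell means

# Sums of squares for a balanced two-way layout.
ss_a = b * n * ((mean_a - grand) ** 2).sum()
ss_b = a * n * ((mean_b - grand) ** 2).sum()
ss_ab = n * ((mean_ab - mean_a[:, None] - mean_b[None, :] + grand) ** 2).sum()
ss_err = ((scores - mean_ab[:, :, None]) ** 2).sum()

df_a, df_b = a - 1, b - 1
df_ab = df_a * df_b
df_err = a * b * (n - 1)
ms_err = ss_err / df_err

for name, ss, df in [("cohort", ss_a, df_a), ("gender", ss_b, df_b),
                     ("cohort x gender", ss_ab, df_ab)]:
    f_stat = (ss / df) / ms_err
    p = stats.f.sf(f_stat, df, df_err)
    print(f"{name}: F({df}, {df_err}) = {f_stat:.2f}, p = {p:.3f}")
```

A non-significant p-value for each effect would correspond to the "statistically similar" finding reported in the abstract.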
  3. The Delphi method has been adapted to inform item refinements in educational and psychological assessment development. An explanatory sequential mixed methods design using Delphi is a common approach to gain experts' insight into why items might have exhibited differential item functioning (DIF) for a sub-group, indicating potential item bias. Use of Delphi before quantitative field testing to screen for potential sources of item bias is lacking in the literature. An exploratory sequential design is illustrated as an additional approach using a Delphi technique in Phase I and Rasch DIF analyses in Phase II. We introduce the 2 × 2 Concordance Integration Typology as a systematic way to examine agreement and disagreement across the qualitative and quantitative findings using a concordance joint display table. A worked example from the development of the Problem-Solving Measures Grades 6–8 Computer Adaptive Tests supported using an exploratory sequential design to inform item refinement. The 2 × 2 Concordance Integration Typology (a) crystallized instances where additional refinements were potentially needed and (b) provided for evaluating the distribution of bias across the set of items as a whole. Implications are discussed for advancing data integration techniques and using mixed methods to improve instrument development.
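The 2 × 2 typology described above crosses two binary flags per item: whether the qualitative Delphi review flagged it and whether the quantitative Rasch DIF analysis flagged it. A minimal sketch of that classification, with hypothetical item names and flags:

```python
# Hypothetical flags per item: did the Delphi panel flag potential bias,
# and did the Rasch DIF analysis flag the item? Crossing the two yields
# the four cells of the 2 x 2 concordance typology.
items = {
    "item_01": {"delphi_flag": True,  "dif_flag": True},
    "item_02": {"delphi_flag": True,  "dif_flag": False},
    "item_03": {"delphi_flag": False, "dif_flag": True},
    "item_04": {"delphi_flag": False, "dif_flag": False},
}

def concordance_cell(delphi_flag: bool, dif_flag: bool) -> str:
    """Assign an item to one of the four 2 x 2 concordance cells."""
    if delphi_flag and dif_flag:
        return "concordant: both flag potential bias"
    if not delphi_flag and not dif_flag:
        return "concordant: neither flags bias"
    if delphi_flag:
        return "discordant: qualitative flag only"
    return "discordant: quantitative flag only"

for name, flags in items.items():
    print(name, "->", concordance_cell(**flags))
```

The two discordant cells are where a joint display table earns its keep: they point to items where one strand of evidence sees a problem the other misses.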
  4. This study presents qualitative findings from a larger instrument validation study. Undergraduates and subject matter experts (SMEs) were pivotal in early-stage development of a survey focusing on the four domains of Validation Theory (academic in-class, academic out-of-class, interpersonal in-class, interpersonal out-of-class). An iterative approach allowed for a more rigorously constructed survey refined through multiple phases. The research team met regularly to determine how feedback from undergraduates and SMEs could improve items and whether certain populations were potentially being excluded. To date, the research team has expanded the original 47 items to 51 to address feedback provided by SMEs and undergraduate participants. Numerous item wording revisions have been made. Support for content, response process, and consequential validity evidence is strong.
  5. Existing literature has established that interpersonal and academic validating experiences help provide college students with the necessary personal and scholastic skillsets to thrive in higher education (e.g., Coronella, 2018; Ekal et al., 2011). This intrinsic mixed methods case study explores the extent to which undergraduate students’ perceived academic and interpersonal validation within a science, technology, engineering, and mathematics (STEM) pipeline program (CMSP) can empower them and influence their attitudes towards their learning environment.
  8. Olanoff, D.; Johnson, K.; Spitzer, S. (Eds.)
    The COVID-19 pandemic has raged on over the last year and has greatly impacted student learning. The average student is predicted to fall behind by approximately seven months academically; Latinx and Black students, however, are predicted to fall behind by 9 and 10 months, respectively (Seiden, 2020). Moreover, the shift to online instruction impacted students’ ability to learn as they encountered new stressors, anxiety, illness, and the pandemic’s psychological effects (Middleton, 2020). Despite the unprecedented circumstances that students were precipitously thrust into, state testing and assessments continue. Assessments during the pandemic are likely to produce invalid results due to “test pollution,” which refers to the systemic “increase or decrease in test scores unrelated to the content domain” (Middleton, 2020, p. 2). Considering the global pandemic, test pollution is prominent and worth exploring, as it is uncertain whether state testing can identify the impact COVID is having on student learning.
  9. The Standards for Educational and Psychological Testing were developed by the American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (AERA et al., 2014). The Standards specify that assessment developers establish five types of validity evidence: test content, response processes, internal structure, relationship to other variables, and consequential/bias. Relevant to this proposal is consequential validity evidence, which identifies the potential negative impact of testing or bias. Standard 3.1 of The Standards (2014) on fairness in testing states that “those responsible for test development, revision, and administration should design all steps of the testing process to promote valid score interpretations for intended score uses for the widest possible range of individuals and relevant sub-groups in the intended populations” (p. 63). Three types of bias include construct, method, and item bias (Boer et al., 2018). Testing for differential item functioning (DIF) is a standard analysis adopted to detect item bias against a subgroup (Boer et al., 2018). Example subgroups include gender, race/ethnic group, socioeconomic status, native language, or disability. DIF is when “equally able test takers differ in their probabilities answering a test item correctly as a function of group membership” (AERA et al., 2005, p. 51). DIF indicates systematic error, as compared to real mean group differences (Camilli & Shepard, 1994). Items exhibiting significant DIF are removed, or reviewed for sources of bias to determine whether modifications allow an item to be retained and tested further. The Delphi technique is an emergent systematic research method whereby expert panel members review item content through an iterative process (Yildirim & Büyüköztürk, 2018).
Experts independently evaluate each item for potential sources of DIF, researchers group their responses, and experts then independently complete a survey to rate their level of agreement with the anonymously grouped responses. This process continues until saturation and consensus are reached among experts, as established through some criterion (e.g., median agreement rating, interquartile range, and percent agreement). The technique allows researchers to “identify, learn, and share the ideas of experts by searching for agreement among experts” (Yildirim & Büyüköztürk, 2018, p. 451). Research has illustrated this technique applied after DIF is detected, but not before administering items in the field. The current research is a methodological illustration of the Delphi technique applied in the item construction phase of assessment development, as part of a five-year study to develop and test new problem-solving measures (PSM; Bostic et al., 2015, 2017) for U.S. grades 6–8 in a computer adaptive testing environment. As part of an iterative design-science-based methodology (Middleton et al., 2008), we illustrate the integration of the Delphi technique into the item-writing process. Results from two three-person panels, each reviewing a set of 45 PSM items, are utilized to illustrate the technique. Advantages and limitations identified through a survey by participating experts and researchers are outlined to advance the method.
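The consensus criteria named above (median agreement rating, interquartile range, percent agreement) can be computed directly from a panel's ratings. A minimal sketch on hypothetical ratings; the 1–5 scale, the panel size, and the thresholds are illustrative assumptions, not criteria from the study:

```python
import statistics

# Hypothetical expert agreement ratings (1 = strongly disagree, 5 = strongly
# agree) for one grouped Delphi response across a seven-member panel.
ratings = [4, 5, 4, 4, 3, 5, 4]

median = statistics.median(ratings)
q1, _, q3 = statistics.quantiles(ratings, n=4)   # quartiles of the ratings
iqr = q3 - q1
# Share of experts rating "agree" (4) or higher.
pct_agree = sum(r >= 4 for r in ratings) / len(ratings)

# Example consensus rule (assumed thresholds): high median, tight spread,
# and a clear majority agreeing.
consensus = median >= 4 and iqr <= 1 and pct_agree >= 0.75
print(f"median={median}, IQR={iqr}, percent agreement={pct_agree:.0%}, "
      f"consensus={consensus}")
```

Under such a rule, a grouped response that fails the criterion would go back to the panel for another Delphi round, consistent with the iterative process the abstract describes.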