This Brief Report presents an example of assessment validation using an argument-based approach. The instrument we developed, the Brief Assessment of Students’ Mature Number Sense, measures a construct that is a central goal of mathematics education. We developed this assessment to provide an efficient way to measure the effect of instructional practices designed to improve students’ number sense. Using an argument-based framework, we first identify our proposed interpretations and uses of student scores. We then outline our argument with three claims that provide evidence connecting students’ responses on the assessment with its intended uses. Finally, we highlight why argument-based validation benefits measure developers as well as the broader mathematics education community.
A validation argument for the Priorities for Mathematics Instruction (PMI) survey.
Mathematics education needs measures that can be used to research and/or evaluate the impact of professional development on constructs that are broadly relevant to the field. To address this need, we developed the Priorities for Mathematics Instruction (PMI) survey, consisting of two scales focused on the constructs of Explicit Attention to Concepts (EAC) and Student Opportunities to Struggle (SOS), both of which have been linked to increased student understanding and achievement. We identified the most critical assumptions that underlie the proposed interpretation and use of the scale scores and then examined the related validity evidence. We found that the evidence for each assumption supports the proposed interpretation and use of the scale scores.
- Award ID(s):
- 1907840
- PAR ID:
- 10336663
- Editor(s):
- Olanoff, D.; Johnson, K.; Spitzer, S. M.
- Date Published:
- Journal Name:
- The 43rd Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education, Virtual/Philadelphia, USA.
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
- This WIP research paper presents validity evidence for a survey instrument designed to assess student learning in makerspaces. We report findings from expert reviews of item content and student interpretations of survey questions. The instrument was developed using a theory-driven approach to define constructs, followed by the development of questions aligned with those constructs. We solicited written feedback from 30 experts in instrument development and/or makerspaces, who rated the alignment of items with our constructs. Based on this input, we revised our items for clarity and consistency. We then conducted 25 cognitive interviews with a diverse group of students who use makerspaces, asking them to explain their understanding of each item and the reasoning behind their responses. Our recruitment ensured diversity in terms of race, gender, ethnicity, and academic background, extending beyond engineering majors. From our initial 45 items, we removed 6, modified 36, and added 1 based on expert feedback. During cognitive interviews, we began with 40 items, deleted 1, and revised 23, resulting in 39 items for the pilot survey. Key findings included the value of examples in clarifying broad terms and improved student engagement with a revised rating scale: shifting from a 7-point Likert agreement scale to a self-description format encouraged fuller use of the scale. Our study contributes to the growing body of research on makerspaces by offering insights into how students describe their learning experiences and by providing initial validity evidence for a tool to assess those experiences, ultimately strengthening the credibility of the instrument.
- The purpose of this work-in-progress paper is to share insights from current efforts to develop and test the validity of an instrument that measures undergraduate students’ perceived support in science, technology, engineering, and mathematics (STEM). The development and refinement of our survey instrument ultimately serve to extend, operationalize, and empirically test the Model of Co-curricular Support (MCCS). The MCCS is a conceptual framework of student support that demonstrates the breadth of assistance currently used to support undergraduate students in STEM, particularly those from underrepresented groups. We are currently gathering validity evidence for an instrument that evaluates the extent to which colleges of engineering and science offer supportive environments. To date, exploratory factor analysis and correlational evidence of construct validity have helped us develop 14 constructs of student support in STEM. Future work will focus on modeling relationships between these constructs and student outcomes, providing the explanatory power needed to explain empirically how co-curricular supports contribute to different forms of student success in STEM. We hope that operationalizing the MCCS through this survey will shift how student support is conceptualized and offered, enabling college administrators and student support practitioners to evaluate their portfolios of student support efforts.
- Cook, S.; Katz, B.; Moore-Russo, D. (Eds.) We report preliminary results of selected questions from a national survey of instructors of geometry courses for secondary teachers about the nature of instructor-student interactions. Survey responses (n = 118) are used to indicate six latent constructs describing aspects of instructor-student interaction that, in turn, quantify hypothesized characteristics of two didactical contracts, which we call inquiry in geometry and study of geometry. We found that instructors whose highest degree is in mathematics education are less likely to rely on a study of geometry contract than instructors whose highest degree is in mathematics. We also found that instructors who have previously taught high school geometry are less likely to lecture.
- Although the paradigm wars between quantitative and qualitative research methods, and their associated epistemologies, have settled down in recent years within the mathematics education research community, quantitative methods and randomized control trials remain the gold standard at the policy-making level (USDOE, 2008). Although diverse methods are valued in the mathematics education community, if mathematics educators hope to influence policy to cultivate more equitable education systems, then we must engage in rigorous quantitative research. Quantitative research, however, is limited in what it can measure by the quantitative tools that exist. In mathematics education, the development of quantitative tools, and the study of their associated validity and reliability evidence, appears to have lagged behind the important constructs that rich qualitative research has uncovered. The purpose of this study is to describe quantitative instruments related to mathematics teacher behavior and affect in order to better understand what currently exists in the field, what validity and reliability evidence has been published for such instruments, and what constructs each measures. Our research questions are: (1) How many and what types of instruments of mathematics teacher behavior and affect exist? (2) What types of validity and reliability evidence are published for these instruments? (3) What constructs do these instruments measure? (4) To what extent have issues of equity been the focus of the instruments found?