The purpose of this working group is to continue to bring together scholars with an interest in examining the use of and access to large-scale quantitative tools used to measure student- and teacher-related outcomes in mathematics education. The working group session will focus on (1) updating the workgroup on the progress made since the first working group at PME-NA in Tucson, Arizona, specifically focusing on the outcomes of the Validity Evidence for Measurement in Mathematics Education conference that took place in April 2017 in San Antonio, (2) continued development of a document of available tools and their associated validity evidence, and (3) identification of potential follow-up activities to continue this work. The efforts of the group will be summarized and extended through both social media tools and online collaboration tools to further promote this work.
Exploring and examining quantitative measures
The purpose of this working group is to bring together scholars with an interest in examining the research on quantitative tools and measures for gathering meaningful data, and to spark conversations and collaboration across individuals and groups with an interest in synthesizing the literature on large-scale tools used to measure student- and teacher-related outcomes. While syntheses of measures for use in mathematics education can be found in the literature, few can be described as a comprehensive analysis. The working group session will focus on (1) defining terms identified as critical (e.g., large-scale, quantitative, and validity evidence) for bounding the focus of the group, (2) initial development of a document of available tools and their associated validity evidence, and (3) identification of potential follow-up activities to continue the work to identify tools and develop related synthesis documents (e.g., the formation of sub-groups around potential topics of interest). The efforts of the group will be summarized and extended through both social media tools (e.g., creating a Facebook group) and online collaboration tools (e.g., Google hangouts and documents) to further promote this work.
- Award ID(s):
- 1644314
- PAR ID:
- 10027608
- Date Published:
- Journal Name:
- Proceedings for the 38th Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education
- Page Range / eLocation ID:
- 1641-1647
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
-
In this theory paper, we set out to consider, as a matter of methodological interest, the use of quantitative measures of inter-coder reliability (e.g., percentage agreement, correlation, Cohen's Kappa, etc.) as necessary and/or sufficient correlates for quality within qualitative research in engineering education. It is well known that the phrase qualitative research represents a diverse body of scholarship conducted across a range of epistemological viewpoints and methodologies. Given this diversity, we concur with those who state that it is ill-advised to propose recipes or stipulate requirements for achieving qualitative research validity and reliability. Yet, as qualitative researchers ourselves, we repeatedly find the need to communicate the validity and reliability—or quality—of our work to different stakeholders, including funding agencies and the public. One method for demonstrating quality, which is increasingly used in qualitative research in engineering education, is the practice of reporting quantitative measures of agreement between two or more people who code the same qualitative dataset. In this theory paper, we address this common practice in two ways. First, we identify instances in which inter-coder reliability measures may not be appropriate or adequate for establishing quality in qualitative research. We query research that suggests that the numerical measure itself is the goal of qualitative analysis, rather than the depth and texture of the interpretations that are revealed. Second, we identify complexities or methodological questions that may arise during the process of establishing inter-coder reliability, which are not often addressed in empirical publications. To achieve this purpose, we ground our work in a review of qualitative articles, published in the Journal of Engineering Education, that have employed inter-rater or inter-coder reliability as evidence of research validity.
In our review, we will examine the disparate measures and scores (from 40% agreement to 97% agreement) used as evidence of quality, as well as the theoretical perspectives within which these measures have been employed. Then, using our own comparative case study research as an example, we will highlight the questions and the challenges that we faced as we worked to meet rigorous standards of evidence in our qualitative coding analysis. We will explain the processes we undertook and the challenges we faced as we assigned codes to a large qualitative data set approached from a post-positivist perspective. We will situate these coding processes within the larger methodological literature and, in light of contrasting literature, we will describe the principled decisions we made while coding our own data. We will use this review of qualitative research and our own qualitative research experiences to elucidate inconsistencies and unarticulated issues related to evidence for qualitative validity as a means to generate further discussion regarding quality in qualitative coding processes.
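The contrast the abstract above draws between raw percentage agreement and chance-corrected measures such as Cohen's Kappa can be sketched in code. This is a minimal illustration, not any author's actual analysis pipeline; the example codes and coder labels are invented for demonstration.

```python
from collections import Counter

def percent_agreement(codes_a, codes_b):
    """Raw proportion of items assigned the same code by both coders."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def cohens_kappa(codes_a, codes_b):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, given each coder's marginal code frequencies."""
    n = len(codes_a)
    p_observed = percent_agreement(codes_a, codes_b)
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_expected = sum(
        (freq_a[c] / n) * (freq_b[c] / n)
        for c in set(codes_a) | set(codes_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes assigned by two coders to the same six excerpts
coder_1 = ["theme1", "theme1", "theme2", "theme2", "theme1", "theme2"]
coder_2 = ["theme1", "theme2", "theme2", "theme2", "theme1", "theme1"]

print(percent_agreement(coder_1, coder_2))  # ≈ 0.67
print(cohens_kappa(coder_1, coder_2))       # ≈ 0.33
```

Note how the same coding data yields a much lower kappa than raw agreement once chance is accounted for, which is one reason the disparate scores reported in the literature (40% to 97% agreement) are hard to compare across measures.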
-
Miller, B; Martin, C (Eds.) Quantitative measures in mathematics education have informed policies and practices for over a century. Thus, it is critical that such measures in mathematics education have sufficient validity evidence to improve mathematics experiences for students. This article provides a systematic review of the validity evidence related to measures used in elementary mathematics education. The review includes measures that focus on elementary students as the unit of analysis and attends to validity as defined by current conceptions of measurement. Findings suggest that one in ten measures in mathematics education includes rigorous evidence to support intended uses. Recommendations are made to support mathematics education researchers to continue to take steps to improve validity evidence in the design and use of quantitative measures.
-
Problem. Extant measures of students' cybersecurity self-efficacy lack sufficient evidence of validity based on internal structure. Such evidence of validity is needed to enhance confidence in conclusions drawn from use of self-efficacy measures in the cybersecurity domain. Research Question. To address this identified problem, we sought to answer our research question: What is the underlying factor structure of a new self-efficacy for Information Security measure? Method. We leveraged exploratory factor analysis (EFA) to determine the number of factors underlying a new measure of student self-efficacy to conduct information security. This measure was created to align with the five elements of the information security section of the K-12 Cybersecurity Education framework. Participants were 190 undergraduate students recruited from computer science courses across the U.S. Findings. Results from the EFA indicated that a four-factor solution best fit the data while maximizing interpretability of the factors. The internal reliability of the measure was quite strong (α = .99). Implications. The psychometric quality of this measure was demonstrated, and thus evidence of validity based on internal structure has been established. Future work will conduct a confirmatory factor analysis (CFA) and assess measurement invariance across subgroups of interest (e.g., over- vs. under-represented race/ethnicity groups, gender).
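The internal reliability statistic reported above (α = .99) is Cronbach's alpha, which can be computed directly from a respondents-by-items score matrix. This sketch uses invented sample data, not the study's actual survey responses, and is only an illustration of the formula.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a respondents-by-items matrix of scale scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance(total score))
    where k is the number of items.
    """
    scores = np.asarray(item_scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5 respondents x 3 Likert-style items
data = [
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 2],
]
print(cronbach_alpha(data))  # ≈ 0.96
```

When items covary strongly, as in this toy data, alpha approaches 1; perfectly redundant items give exactly 1, which is why a value as high as .99 can also prompt questions about item redundancy.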