Title: Exploring and examining quantitative measures
The purpose of this working group is to bring together scholars with an interest in examining the research on quantitative tools and measures for gathering meaningful data, and to spark conversations and collaboration across individuals and groups with an interest in synthesizing the literature on large-scale tools used to measure student- and teacher-related outcomes. While syntheses of measures for use in mathematics education can be found in the literature, few can be described as comprehensive analyses. The working group session will focus on (1) defining terms identified as critical (e.g., large-scale, quantitative, and validity evidence) for bounding the focus of the group, (2) initial development of a document of available tools and their associated validity evidence, and (3) identification of potential follow-up activities to continue the work of identifying tools and developing related synthesis documents (e.g., the formation of sub-groups around potential topics of interest). The efforts of the group will be summarized and extended through both social media tools (e.g., creating a Facebook group) and online collaboration tools (e.g., Google Hangouts and Docs) to further promote this work.
Award ID(s):
1644314
NSF-PAR ID:
10027608
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings for the 38th Annual Meeting of the North American Chapter of the International Group for the Psychology of Mathematics Education
Page Range / eLocation ID:
1641-1647
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The purpose of this working group is to continue to bring together scholars with an interest in examining the use of and access to large-scale quantitative tools used to measure student- and teacher-related outcomes in mathematics education. The working group session will focus on (1) updating the workgroup on the progress made since the first working group at PME-NA in Tucson, Arizona, specifically focusing on the outcomes of the Validity Evidence for Measurement in Mathematics Education conference that took place in April 2017 in San Antonio, (2) continued development of a document of available tools and their associated validity evidence, and (3) identification of potential follow-up activities to continue this work. The efforts of the group will be summarized and extended through both social media tools and online collaboration tools to further promote this work.
  2. In this theory paper, we set out to consider, as a matter of methodological interest, the use of quantitative measures of inter-coder reliability (e.g., percentage agreement, correlation, Cohen's Kappa, etc.) as necessary and/or sufficient correlates for quality within qualitative research in engineering education. It is well known that the phrase qualitative research represents a diverse body of scholarship conducted across a range of epistemological viewpoints and methodologies. Given this diversity, we concur with those who state that it is ill-advised to propose recipes or stipulate requirements for achieving qualitative research validity and reliability. Yet, as qualitative researchers ourselves, we repeatedly find the need to communicate the validity and reliability—or quality—of our work to different stakeholders, including funding agencies and the public. One method for demonstrating quality, which is increasingly used in qualitative research in engineering education, is the practice of reporting quantitative measures of agreement between two or more people who code the same qualitative dataset. In this theory paper, we address this common practice in two ways. First, we identify instances in which inter-coder reliability measures may not be appropriate or adequate for establishing quality in qualitative research. We query research that suggests that the numerical measure itself is the goal of qualitative analysis, rather than the depth and texture of the interpretations that are revealed. Second, we identify complexities or methodological questions that may arise during the process of establishing inter-coder reliability, which are not often addressed in empirical publications. To achieve these purposes, in this paper we will ground our work in a review of qualitative articles, published in the Journal of Engineering Education, that have employed inter-rater or inter-coder reliability as evidence of research validity.
In our review, we will examine the disparate measures and scores (from 40% agreement to 97% agreement) used as evidence of quality, as well as the theoretical perspectives within which these measures have been employed. Then, using our own comparative case study research as an example, we will highlight the questions and the challenges that we faced as we worked to meet rigorous standards of evidence in our qualitative coding analysis. We will explain the processes we undertook and the challenges we faced as we assigned codes to a large qualitative data set approached from a post-positivist perspective. We will situate these coding processes within the larger methodological literature and, in light of contrasting literature, we will describe the principled decisions we made while coding our own data. We will use this review of qualitative research and our own qualitative research experiences to elucidate inconsistencies and unarticulated issues related to evidence for qualitative validity as a means to generate further discussion regarding quality in qualitative coding processes.
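The gap between the agreement measures named above can be made concrete with a small sketch. The following is a minimal illustration (using hypothetical coder labels, not data from the reviewed studies) of how simple percentage agreement can overstate reliability relative to Cohen's Kappa, which corrects for the agreement two coders would reach by chance:

```python
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items the two coders labeled identically."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    n = len(a)
    p_o = percent_agreement(a, b)
    # Expected chance agreement from each coder's marginal label frequencies
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[label] * cb[label] for label in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes assigned by two coders to ten interview excerpts
coder1 = ["theme_a", "theme_a", "theme_b", "theme_b", "theme_a",
          "theme_b", "theme_a", "theme_a", "theme_b", "theme_a"]
coder2 = ["theme_a", "theme_a", "theme_b", "theme_a", "theme_a",
          "theme_b", "theme_a", "theme_b", "theme_b", "theme_a"]

print(percent_agreement(coder1, coder2))          # 0.8
print(round(cohens_kappa(coder1, coder2), 3))     # 0.583
```

Here 80% raw agreement shrinks to a kappa of about 0.58 once chance agreement on the two skewed theme labels is accounted for, which is one reason the two statistics cannot be used interchangeably as evidence of quality.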
  3. Miller, B.; Martin, C. (Eds.)
    Quantitative measures in mathematics education have informed policies and practices for over a century. Thus, it is critical that such measures in mathematics education have sufficient validity evidence to improve mathematics experiences for students. This article provides a systematic review of the validity evidence related to measures used in elementary mathematics education. The review includes measures that focus on elementary students as the unit of analysis and attends to validity as defined by current conceptions of measurement. Findings suggest that one in ten measures in mathematics education includes rigorous evidence to support intended uses. Recommendations are made to support mathematics education researchers in continuing to take steps to improve validity evidence in the design and use of quantitative measures.
  5. National Science Foundation (NSF) funded Engineering Research Centers (ERCs) must complement their technical research with various education and outreach opportunities to: 1) improve and promote engineering education, both within the center and in the local community; 2) encourage underrepresented populations to participate in engineering activities; and 3) advocate communication and collaboration between industry and academia. ERCs ought to perform adequate evaluation of their educational and outreach programs to ensure that beneficial goals are met. Each ERC has complete autonomy in conducting and reporting such evaluation. The evaluation tools used by individual ERCs are quite similar, but each ERC has designed its evaluation processes in isolation, including evaluation tools such as survey instruments, interview protocols, focus group protocols, and/or observation protocols. These isolated efforts have resulted in redundant expenditure of resources and a lack of outcome comparability across ERCs. Leaders from three different ERCs initiated a collaborative effort to address this issue by building a suite of common evaluation instruments that all current and future ERCs can use. This leading group consists of education directors and external evaluators from all three partner ERCs and engineering education researchers, who have worked together for two years. The project intends to address the four ERC program clusters: Broadening Participation in Engineering, Centers and Networks, Engineering Education, and Engineering Workforce Development. The instruments developed will attend to culture of inclusion, outreach activities, mentoring experience, and sustained interest in engineering. The project will deliver best practices in education program evaluation, which will not only support existing ERCs but will also serve as immediate tools for brand-new ERCs and similar large-scale research centers.
Expanding the research beyond TEEC and sharing the developed instruments with NSF as well as other ERCs will also promote and encourage continued cross-ERC collaboration and research. Further, joint evaluation will increase evaluation consistency across all ERC education programs. Embedded instrument feedback loops will lead to continual improvement of ERC education performance and support the growth of an inclusive and innovative engineering workforce. Four major deliverables are planned. First, develop a common quantitative assessment instrument, named the Multi-ERC Instrument Inventory (MERCII). Second, develop a set of qualitative instruments to complement MERCII. Third, create a web-based evaluation platform for MERCII. Fourth, update the NSF ERC education program evaluation best-practice manual. Together, these deliverables will become part of, and be supplemented by, an ERC evaluator toolbox. This project strives to significantly impact how ERCs evaluate their educational and outreach programs. Studies based on a single ERC lack the sample size to truly test the validity of any evaluation instruments or measures; a common suite of instruments across ERCs would provide an opportunity for a large-scale assessment study. The online platform will further provide an easy-to-use tool for all ERCs to facilitate evaluation, data sharing, and impact reporting.