Title: A Revised Measure of Acceptance of the Theory of Evolution: Introducing the MATE 2.0
Hundreds of articles have explored the extent to which individuals accept evolution, and the Measure of Acceptance of the Theory of Evolution (MATE) is the most frequently used survey. However, research indicates that the MATE has limitations, and it has not been updated since its creation more than 20 years ago. In this study, we revised the MATE using information from cognitive interviews with 62 students that revealed response process errors with the original instrument. We found that students answered items on the MATE based on constructs other than their acceptance of evolution, which led to answer choices that did not fully align with their actual acceptance. Specifically, students answered items based on their understanding of evolution, their understanding of the nature of science, and differing definitions of evolution. We revised items on the MATE, conducted 29 cognitive interviews on the revised version, and administered it to 2881 students in 22 classes. We provide response process validity evidence for the new measure through cognitive interviews with students, structural validity evidence through a Rasch dimensionality analysis, and concurrent validity evidence through correlations with other measures of evolution acceptance. Researchers can now measure student evolution acceptance using this new version of the survey, which we have called the MATE 2.0.
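As a rough illustration of the concurrent validity analysis described above, correlations between total scores on the MATE 2.0 and other acceptance measures can be computed as follows. This is a minimal Python sketch under assumed data: the file name and column names (mate2_total, isea_total, gaene_total) are hypothetical placeholders, not the authors' actual pipeline.

import pandas as pd

# Hypothetical file: one row per student, one column of total scores per instrument.
df = pd.read_csv("evolution_acceptance_scores.csv")

# Concurrent validity: Pearson correlations between the MATE 2.0 total score
# and total scores on other evolution acceptance measures.
for other in ["isea_total", "gaene_total"]:
    r = df["mate2_total"].corr(df[other])  # Series.corr defaults to Pearson's r
    print(f"MATE 2.0 vs {other}: r = {r:.2f}")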
Misheva, Taya; Brownell, Sara E.; Barnes, M. Elizabeth
(, CBE—Life Sciences Education)
Offerdahl, Erika (Ed.)
In this study, the authors examined the response-process validity of two recent measures of student evolution acceptance, the Inventory of Student Evolution Acceptance (I-SEA) and the Generalized Acceptance of Evolution Evaluation (GAENE), using student interviews. They found several validity issues that can inform future study design and survey improvement.
Ketterlin-Geller, Leanne R; Haider, Muhammad Qadeer; McMurrer, Jennifer
(, Educational Assessment)
This article illustrates and differentiates the unique role cognitive interviews and think-aloud interviews play in developing and validating assessments. Specifically, we describe the use of (a) cognitive interviews to gather empirical evidence to support claims about the intended construct being measured and (b) think-aloud interviews to gather evidence about the problem-solving processes students use while completing tasks assessing the intended construct. We illustrate their use in the context of a classroom assessment of an early mathematics construct – numeric relational reasoning – for kindergarten through Grade 2 students. This assessment is intended to provide teachers with data to guide their instructional decisions. We conducted 64 cognitive interviews with 32 students to collect evidence about students’ understanding of the construct. We conducted 106 think-aloud interviews with 14 students to understand how the prototypical items elicited the intended construct. The task-based interview results iteratively informed assessment development and contributed important sources of validity evidence.
Procko, Kristen; Vardar-Ulu, Didem; Beckham, Josh
(, Federation of European Biochemical Societies (FEBS))
For over a decade, the BioMolViz group has been working to improve biomolecular visualization instruction and assessment. Through workshops that engaged educators in visual assessment writing and revision, this community has produced hundreds of assessment items, a subset of which are freely available to educators through an online repository, the BioMolViz Library. Assessment items are at various stages of a validation process developed by BioMolViz. To establish evidence of validity, these items were iteratively revised by instructors, reviewed by an expert panel, and tested in classrooms. Here, we describe the results of the final phase of our validation process, which involved classroom testing across 10 United States-based colleges and universities with over 700 students. Classical test theory was applied to evaluate 26 multiple-choice or multiple-select items divided across two assessment sets. The results indicate that the validation process was successful in producing assessments that performed within our defined ideal range for difficulty and discrimination indices, with only four items falling outside this range. However, some assessments showed performance differences among student demographic groups. Thus, we added an interview phase to our process, which involved 20 student participants across three institutions. In these semi-structured group interviews, students described their problem-solving strategies, adding their unique insights as the discussion progressed. As these interview transcripts were qualitatively coded, areas to further improve assessment items were identified. We will illustrate the progression of several items through the entire validation process and discuss how student problem-solving strategies can be leveraged to guide effective assessment design.
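To make the classical test theory terms above concrete: the difficulty index is typically the proportion of students answering an item correctly, and the discrimination index is often the corrected item-total (point-biserial) correlation. A minimal Python sketch, assuming a 0/1-scored response matrix; the file name is hypothetical and the computation is a textbook version, not BioMolViz's actual analysis.

import numpy as np

# Hypothetical file: rows = students, columns = items, entries scored 0 or 1.
scores = np.loadtxt("item_scores.csv", delimiter=",")
total = scores.sum(axis=1)

for j in range(scores.shape[1]):
    difficulty = scores[:, j].mean()  # proportion of students answering correctly
    rest = total - scores[:, j]       # total score excluding item j ("corrected")
    discrimination = np.corrcoef(scores[:, j], rest)[0, 1]  # point-biserial r
    print(f"item {j + 1}: difficulty = {difficulty:.2f}, discrimination = {discrimination:.2f}")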
Abstract: Teachers must know how to use language to support students in knowledge-generation environments that align with the Next Generation Science Standards. To measure this knowledge, this study refines a survey on teachers' knowledge of language as an epistemic tool. Rasch modelling was used to examine the fit statistics of 15 items and the functioning of a previously designed questionnaire's response categories. Cronbach's alpha reliability was also examined. Additionally, interviews were used to investigate teachers' interpretations of each item and to identify ambiguous items. Three ambiguous items were deleted on the basis of the qualitative data, and three more items were deleted because of negative correlations and mismatched fit statistics. Finally, we present a revised language questionnaire with nine items, acceptable correlations, and good fit statistics, with utility for science education researchers and teacher educators. This research contributes a revised questionnaire to measure teachers' knowledge of language that could inform professional development efforts. It also describes instrument refinement processes that could be applied elsewhere.
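The Cronbach's alpha reliability mentioned above has a standard closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), for k items. A minimal Python sketch under assumed data; the file name is hypothetical and this is the generic formula, not the study's exact analysis.

import numpy as np

def cronbach_alpha(items):
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical file: rows = teachers, columns = the nine Likert items.
responses = np.loadtxt("teacher_responses.csv", delimiter=",")
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")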
Barnes, M. Elizabeth; Supriya, K.; Zheng, Yi; Roberts, Julie A.; Brownell, Sara E.
(, CBE—Life Sciences Education)
Price, Rebecca (Ed.)
Evolution is controversial among students, and religiosity, religious affiliation, understanding of evolution, and demographics are known predictors of evolution acceptance. However, quantitative research has not explored the unique impact of students' perceived conflict between their religion and evolution as a major factor influencing evolution acceptance. We developed an instrument with validity evidence called "Perceived Conflict between Evolution and Religion" (PCoRE). Using this measure, we find that, among students in 26 biology courses in 11 states, adding students' perceived conflict between their religion and evolution to linear mixed models more than doubled the capacity of the models to predict evolution acceptance compared with models that included only religiosity, religious affiliation, understanding of evolution, and demographics. Perceived conflict between evolution and religion was the strongest predictor of evolution acceptance among all variables and mediated the impact of religiosity on evolution acceptance. These results build on prior literature suggesting that reducing perceived conflict between students' religious beliefs and evolution can help raise evolution acceptance. Further, they indicate that future evolution acceptance studies should include measures of perceived conflict between religion and evolution.
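A linear mixed model of the kind described above can be sketched as follows: evolution acceptance regressed on religiosity, understanding of evolution, and PCoRE score, with a random intercept for each course to handle students nested in classes. This is a hedged illustration under assumed data; the file name and variable names are hypothetical, and the model is simplified relative to the authors' full specification.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file: one row per student, with course_id identifying the class.
df = pd.read_csv("evolution_survey.csv")

# Fixed effects for religiosity, understanding, and perceived conflict (PCoRE);
# random intercept per biology course (students nested in courses).
model = smf.mixedlm("acceptance ~ religiosity + understanding + pcore",
                    data=df, groups=df["course_id"])
result = model.fit()
print(result.summary())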
Barnes, M. Elizabeth, Misheva, Taya, Supriya, K., Rutledge, Michael, and Brownell, Sara E. A Revised Measure of Acceptance of the Theory of Evolution: Introducing the MATE 2.0. Retrieved from https://par.nsf.gov/biblio/10381513. CBE—Life Sciences Education 21.1 Web. doi:10.1187/cbe.21-05-0127.
Barnes, M. Elizabeth, Misheva, Taya, Supriya, K., Rutledge, Michael, & Brownell, Sara E. A Revised Measure of Acceptance of the Theory of Evolution: Introducing the MATE 2.0. CBE—Life Sciences Education, 21 (1). Retrieved from https://par.nsf.gov/biblio/10381513. https://doi.org/10.1187/cbe.21-05-0127
Barnes, M. Elizabeth, Misheva, Taya, Supriya, K., Rutledge, Michael, and Brownell, Sara E.
"A Revised Measure of Acceptance of the Theory of Evolution: Introducing the MATE 2.0". CBE—Life Sciences Education 21 (1). Country unknown/Code not available. https://doi.org/10.1187/cbe.21-05-0127.https://par.nsf.gov/biblio/10381513.
@article{osti_10381513,
title = {A Revised Measure of Acceptance of the Theory of Evolution: Introducing the MATE 2.0},
url = {https://par.nsf.gov/biblio/10381513},
DOI = {10.1187/cbe.21-05-0127},
journal = {CBE—Life Sciences Education},
volume = {21},
number = {1},
author = {Barnes, M. Elizabeth and Misheva, Taya and Supriya, K. and Rutledge, Michael and Brownell, Sara E.},
editor = {Romine, William}
}