Title: A Revised Measure of Acceptance of the Theory of Evolution: Introducing the MATE 2.0
Hundreds of articles have explored the extent to which individuals accept evolution, and the Measure of Acceptance of the Theory of Evolution (MATE) is the most frequently used survey. However, research indicates that the MATE has limitations, and it has not been updated since its creation more than 20 years ago. In this study, we revised the MATE using information from cognitive interviews with 62 students that revealed response process errors with the original instrument. We found that students answered items on the MATE based on constructs other than their acceptance of evolution, which led to answer choices that did not fully align with their actual acceptance. Students answered items based on their understanding of evolution and the nature of science, and on different definitions of evolution. We revised items on the MATE, conducted 29 cognitive interviews on the revised version, and administered it to 2881 students in 22 classes. We provide response process validity evidence for the new measure through cognitive interviews with students, structural validity evidence through a Rasch dimensionality analysis, and concurrent validity evidence through correlations with other measures of evolution acceptance. Researchers can now measure student evolution acceptance using this new version of the survey, which we have called the MATE 2.0.
Award ID(s):
1818659
PAR ID:
10381513
Author(s) / Creator(s):
; ; ; ;
Editor(s):
Romine, William
Date Published:
Journal Name:
CBE—Life Sciences Education
Volume:
21
Issue:
1
ISSN:
1931-7913
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract: This WIP research paper presents validity evidence for a survey instrument designed to assess student learning in makerspaces. We report findings from expert reviews of item content and student interpretations of survey questions. The instrument was developed using a theory-driven approach to define constructs, followed by the development of questions aligned with those constructs. We solicited written feedback from 30 experts in instrument development and/or makerspaces, who rated the alignment of items with our constructs. Based on this input, we revised our items for clarity and consistency. We then conducted 25 cognitive interviews with a diverse group of students who use makerspaces, asking them to explain their understanding of each item and the reasoning behind their responses. Our recruitment ensured diversity in terms of race, gender, ethnicity, and academic background, extending beyond engineering majors. From our initial 45 items, we removed 6, modified 36, and added 1 based on expert feedback. During cognitive interviews, we began with 40 items, deleted one, and revised 23, resulting in 39 items for the pilot survey. Key findings included the value of examples in clarifying broad terms and improved student engagement with a revised rating scale: shifting from a 7-point Likert agreement scale to a self-description format encouraged fuller use of the scale. Our study contributes to the growing body of research on makerspaces by offering insights into how students describe their learning experiences and by providing initial validation evidence for a tool to assess those experiences, ultimately strengthening the credibility of the instrument.
  2. Offerdahl, Erika (Ed.)
    In this study, the authors have examined the response-process validity of two recent measures of student evolution acceptance, the Inventory of Student Evolution Acceptance (I-SEA) and the Generalized Acceptance of Evolution Evaluation (GAENE), using student interviews. They found several validity issues which can inform future study design and survey improvement. 
  3. This article illustrates and differentiates the unique role cognitive interviews and think-aloud interviews play in developing and validating assessments. Specifically, we describe the use of (a) cognitive interviews to gather empirical evidence to support claims about the intended construct being measured and (b) think-aloud interviews to gather evidence about the problem-solving processes students use while completing tasks assessing the intended construct. We illustrate their use in the context of a classroom assessment of an early mathematics construct – numeric relational reasoning – for kindergarten through Grade 2 students. This assessment is intended to provide teachers with data to guide their instructional decisions. We conducted 64 cognitive interviews with 32 students to collect evidence about students’ understanding of the construct. We conducted 106 think-aloud interviews with 14 students to understand how the prototypical items elicited the intended construct. The task-based interview results iteratively informed assessment development and contributed important sources of validity evidence. 
  4. For over a decade, the BioMolViz group has been working to improve biomolecular visualization instruction and assessment. Through workshops that engaged educators in visual assessment writing and revision, this community has produced hundreds of assessment items, a subset of which are freely available to educators through an online repository, the BioMolViz Library. Assessment items are at various stages of a validation process developed by BioMolViz. To establish evidence of validity, these items were iteratively revised by instructors, reviewed by an expert panel, and tested in classrooms. Here, we describe the results of the final phase of our validation process, which involved classroom testing across 10 United States-based colleges and universities with over 700 students. Classical test theory was applied to evaluate 26 multiple-choice or multiple-select items divided across two assessment sets. The results indicate that the validation process was successful in producing assessments that performed within our defined ideal range for difficulty and discrimination indices, with only four items outside of this range. However, some assessments showed performance differences among student demographic groups. Thus, we added an interview phase to our process, which involved 20 student participants across three institutions. In these semi-structured group interviews, students described their problem-solving strategies, adding their unique insights as the discussion progressed. As these interview transcripts were qualitatively coded, areas to further improve the assessment items were identified. We will illustrate the progression of several items through the entire validation process and discuss how student problem-solving strategies can be leveraged to guide effective assessment design.
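The difficulty and discrimination indices from classical test theory mentioned in the abstract above have standard definitions: difficulty is the proportion of students answering an item correctly, and discrimination is the correlation between performance on an item and performance on the rest of the test. A minimal sketch in Python (this is not the BioMolViz group's actual analysis code, and the score matrix is hypothetical):

```python
import numpy as np

def item_statistics(scores):
    """Classical test theory item statistics.

    scores: (students x items) matrix of 0/1 item responses.
    Returns per-item difficulty (proportion correct) and
    discrimination (corrected item-total point-biserial correlation).
    """
    scores = np.asarray(scores, dtype=float)
    difficulty = scores.mean(axis=0)  # proportion correct per item
    discrimination = []
    for j in range(scores.shape[1]):
        # Total score excluding item j, to avoid inflating the correlation
        rest = scores.sum(axis=1) - scores[:, j]
        discrimination.append(np.corrcoef(scores[:, j], rest)[0, 1])
    return difficulty, np.array(discrimination)

# Hypothetical responses from 6 students on 4 items
scores = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
diff, disc = item_statistics(scores)
```

Items with difficulty near 0 or 1, or discrimination near 0, are the typical candidates for revision in a process like the one described.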
  5. Abstract: Teachers must know how to use language to support students in knowledge-generation environments that align with the Next Generation Science Standards. To measure this knowledge, this study refines a survey of teachers' knowledge of language as an epistemic tool. Rasch modelling was used to examine the fit statistics of 15 items and the functioning of the response categories of a previously designed questionnaire. Cronbach's alpha reliability was also examined. Additionally, interviews were used to investigate teachers' interpretations of each item and to identify ambiguous items. Based on the qualitative data, three ambiguous items were deleted, and three more items were deleted because of negative correlations and mismatched fit statistics. Finally, we present a revised language questionnaire with nine items, acceptable correlations, and good fit statistics, with utility for science education researchers and teacher educators. This research contributes a revised questionnaire to measure teachers' knowledge of language that could inform professional development efforts. It also describes instrument refinement processes that could be applied elsewhere.
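The Cronbach's alpha reliability referenced above is a standard internal-consistency statistic, computed from the item variances and the variance of the total score. A minimal sketch, assuming a respondents-by-items score matrix (illustrative only, not the study's analysis code):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum(item variances) / total variance).

    items: (respondents x items) matrix of item scores.
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: perfectly consistent items yield alpha ≈ 1.0
alpha = cronbach_alpha([[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]])
```

Values near 1 indicate high internal consistency; items that lower alpha when retained (or that correlate negatively with the rest, as in the abstract above) are candidates for deletion.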