

Title: A Revised Measure of Acceptance of the Theory of Evolution: Introducing the MATE 2.0
Hundreds of articles have explored the extent to which individuals accept evolution, and the Measure of Acceptance of the Theory of Evolution (MATE) is the most frequently used survey. However, research indicates that the MATE has limitations, and it has not been updated since its creation more than 20 years ago. In this study, we revised the MATE using information from cognitive interviews with 62 students, which revealed response process errors with the original instrument. We found that students answered items on the MATE based on constructs other than their acceptance of evolution, which led to answer choices that did not fully align with their actual acceptance: students answered items based on their understanding of evolution, their understanding of the nature of science, and differing definitions of evolution. We revised items on the MATE, conducted 29 cognitive interviews on the revised version, and administered it to 2881 students in 22 classes. We provide response process validity evidence for the new measure through cognitive interviews with students, structural validity evidence through a Rasch dimensionality analysis, and concurrent validity evidence through correlations with other measures of evolution acceptance. Researchers can now measure student evolution acceptance using this new version of the survey, which we have called the MATE 2.0.
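To make the concurrent-validity step concrete, here is a minimal, hypothetical Python sketch (not the authors' analysis): it correlates simulated MATE 2.0 totals with totals from a second, unnamed acceptance measure. All scores and variable names below are invented placeholders.

```python
# Hypothetical sketch (not the authors' analysis): concurrent validity
# as the correlation between MATE 2.0 totals and another acceptance
# measure. The simulated scores and names are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 200
trait = rng.normal(size=n)                              # latent acceptance
mate2_total = 20 * trait + rng.normal(scale=8, size=n) + 80
other_total = 5 * trait + rng.normal(scale=3, size=n) + 40

# A strong positive correlation with an established instrument is one
# common form of concurrent validity evidence.
r, p = stats.pearsonr(mate2_total, other_total)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# Rank-based check, in case total scores are skewed.
rho, p_s = stats.spearmanr(mate2_total, other_total)
print(f"Spearman rho = {rho:.2f} (p = {p_s:.3g})")
```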
Award ID(s):
1818659
NSF-PAR ID:
10381513
Author(s) / Creator(s):
Editor(s):
Romine, William
Date Published:
Journal Name:
CBE—Life Sciences Education
Volume:
21
Issue:
1
ISSN:
1931-7913
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Background and objectives

    Universities throughout the USA increasingly offer undergraduate courses in evolutionary medicine (EvMed), which creates a need for pedagogical resources. Several resources offer course content (e.g. textbooks) and a previous study identified EvMed core principles to help instructors set learning goals. However, assessment tools are not yet available. In this study, we address this need by developing an assessment that measures students’ ability to apply EvMed core principles to various health-related scenarios.

    Methodology

    The EvMed Assessment (EMA) consists of questions containing a short description of a health-related scenario followed by several likely/unlikely items. We evaluated the assessment’s validity and reliability using a variety of qualitative (expert reviews and student interviews) and quantitative (Cronbach’s α and classical test theory) methods. We iteratively revised the assessment through several rounds of validation. We then administered the assessment to undergraduates in EvMed and Evolution courses at multiple institutions.
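    As a rough illustration of the classical test theory screening mentioned above, here is a minimal Python sketch (ours, not the authors' code). It computes Cronbach's α and corrected item-total correlations from a toy matrix of scored likely/unlikely responses; the data are invented.

```python
# Minimal sketch (not the authors' code): classical test theory item
# screening on a respondents-by-items matrix of 0/1-scored responses.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

def item_discrimination(scores: np.ndarray) -> np.ndarray:
    """Corrected item-total correlation for each item."""
    total = scores.sum(axis=1)
    return np.array([
        np.corrcoef(scores[:, j], total - scores[:, j])[0, 1]
        for j in range(scores.shape[1])
    ])

# Toy data: 5 respondents x 4 likely/unlikely items scored 0/1.
X = np.array([[1, 1, 1, 0],
              [1, 0, 1, 1],
              [0, 0, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 0]])
print(f"alpha = {cronbach_alpha(X):.2f}")
print("discrimination =", np.round(item_discrimination(X), 2))
```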

    Results

    We used results from the pilot to create the EMA final draft. After conducting quantitative validation, we deleted items that failed to meet performance criteria and revised items that exhibited borderline performance. The final version of the EMA consists of six core questions containing 25 items, and five supplemental questions containing 20 items.

    Conclusions and implications

    The EMA is a pedagogical tool supported by a wide range of validation evidence. Instructors can use it as a pre/post measure of student learning in an EvMed course to inform curriculum revision, or as a test bank to draw upon when developing in-class assessments, quizzes or exams.

     
  2. The purpose of this study is to develop an instrument to measure student perceptions about the learning experiences in their online undergraduate engineering courses. Online education continues to grow broadly in higher education, but the movement toward acceptance and comprehensive utilization of online learning has generally been slower in engineering. Recently, however, there have been indicators that this could be changing. For example, ABET has accredited online undergraduate engineering degrees at Stony Brook University and Arizona State University (ASU), and an increasing number of other undergraduate engineering programs also offer online courses. During this period of transition in engineering education, further investigation of the online modality in the context of engineering education is needed, and survey instrumentation can support such investigations.

    The instrument presented in this paper is grounded in a Model for Online Course-level Persistence in Engineering (MOCPE), which our research team developed by combining two motivational frameworks used to study student persistence: the Expectancy x Value Theory of Achievement Motivation (EVT) and the ARCS model of motivational design. The initial MOCPE instrument contained 79 items related to students' perceptions about the characteristics of their courses (i.e., the online learning management system, instructor practices, and peer support), expectancies of course success, course task values, perceived course difficulties, and intention to persist in the course.

    Evidence of validity and reliability was collected using a three-step process. First, we tested face and content validity of the instrument with experts in online engineering education and with online undergraduate engineering students. Next, the survey was administered to the online undergraduate engineering student population at a large, Southwestern public university, and an exploratory factor analysis (EFA) was conducted on the responses. Lastly, evidence of reliability was obtained by computing the internal consistency of each resulting scale. The final instrument has seven scales with 67 items across 10 factors. The Cronbach alpha values for these scales range from 0.85 to 0.97. The full paper will provide complete details about the development and psychometric evaluation of the instrument, including evidence of validity and reliability.

    The instrument described in this paper will ultimately be used as part of a larger, National Science Foundation-funded project investigating the factors influencing online undergraduate engineering student persistence. It is currently being used in the context of this project to conduct a longitudinal study intended to understand the relationships between the experiences of online undergraduate engineering students in their courses and their intentions to persist in the course. We anticipate that the instrument will be of interest and use to other engineering education researchers who are also interested in studying the population of online students.
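    For readers unfamiliar with the EFA step, here is a hypothetical Python sketch (not the study's code) using scikit-learn; the data, item count, and factor count are invented toy values, and the item-to-factor assignment shown is just one rough way to sketch candidate scales.

```python
# Hypothetical sketch (not the study's code): exploratory factor
# analysis of Likert-style responses with scikit-learn.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(300, 20)).astype(float)  # 300 students x 20 items

# Extract a small number of rotated factors (the study retained 10).
fa = FactorAnalysis(n_components=4, rotation="varimax").fit(X)
loadings = fa.components_.T          # shape: (items, factors)

# A simple item-to-factor assignment by largest absolute loading,
# one rough way to group items into candidate scales after an EFA.
assignment = np.abs(loadings).argmax(axis=1)
for factor in range(loadings.shape[1]):
    items = np.flatnonzero(assignment == factor)
    print(f"factor {factor}: items {items.tolist()}")
```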
  3. Abstract

    Background

    As technology moves rapidly forward and our world becomes more interconnected, we are seeing increases in the complexity and challenge associated with scientific problems. More than ever before, scientists will need to be resilient and able to cope with challenges and failures en route to success. However, we still understand relatively little about how these skills manifest in STEM contexts broadly, and how they are developed by STEM undergraduate students. While recent studies have begun to explore this area, no measures exist that are specifically designed to assess coping behaviors in STEM undergraduate contexts at scale. Fortunately, multiple measures of coping do exist and have been previously used in more general contexts. Drawing strongly from items used in the COPE and Brief COPE, we gathered a pool of items anticipated to be good measures of undergraduate students’ coping behaviors in STEM. We tested the validity of these items for use with STEM students using exploratory factor analyses, confirmatory factor analyses, and cognitive interviews. In particular, our confirmatory factor analyses and cognitive interviews explored whether the items measured coping for persons excluded due to ethnicity or race (PEERs).

    Results

    Our analyses revealed two versions of what we call the STEM-COPE instrument, each of which accurately measures several dimensions of coping for undergraduate STEM students. One version, which we call the Coping Behaviors version, is more fine-grained and describes specific coping actions. The other contains some specific scales plus two omnibus scales describing what we call challenge-engaging and challenge-avoiding coping; this version is designated the Coping Styles version. We confirmed that both versions can be used reliably in PEER and non-PEER populations.

    Conclusions

    The final products of our work are two versions of the STEM-COPE. Each version measures several dimensions of coping that can be used in individual classrooms or across contexts to assess STEM undergraduate students’ coping with challenges or failures. Each version can be used as a whole, or individual scales can be adopted and used for more specific studies. This work also highlights the need to either develop or adapt other existing measures for use with undergraduate STEM students, and more specifically, for use with sub-populations within STEM who have been historically marginalized or minoritized.

     
  4. Abstract

    Background

    Individuals on the autism spectrum are reported to display alterations in interoception, the sense of the internal state of the body. The Interoception Sensory Questionnaire (ISQ) is a 20-item self-report measure of interoception specifically intended to measure this construct in autistic people. The psychometrics of the ISQ, however, have not previously been evaluated in a large sample of autistic individuals.

    Methods

    Using confirmatory factor analysis, we evaluated the latent structure of the ISQ in a large online sample of adults on the autism spectrum and found that the unidimensional model fit the data poorly. Using misspecification analysis to identify areas of local misfit and item response theory to investigate the appropriateness of the seven-point response scale, we removed redundant items and collapsed the response options to put forth a novel eight-item, five-response choice ISQ.

    Results

    The revised, five-response choice ISQ (ISQ-8) showed much improved fit while maintaining high internal reliability. Differential item functioning (DIF) analyses indicated that the items of the ISQ-8 were answered in comparable ways by autistic adolescents and adults and across multiple other sociodemographic groups.

    Limitations

    Our results were limited by the fact that we did not collect data for typically developing controls, preventing the analysis of DIF by diagnostic status. Additionally, while this study proposes a new five-response scale for the ISQ-8, our data were not collected using this method; thus, the psychometric properties of the revised version of this instrument require further investigation.

    Conclusion

    The ISQ-8 shows promise as a reliable and valid measure of interoception in adolescents and adults on the autism spectrum, but additional work is needed to examine its psychometrics in this population. A free online score calculator has been created to facilitate the use of ISQ-8 latent trait scores for further studies of autistic adolescents and adults (available at https://asdmeasures.shinyapps.io/ISQ_score/).
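    As one concrete, hypothetical illustration of a DIF screen, the sketch below uses the common logistic-regression approach with statsmodels. The paper does not state which DIF procedure it used, and all variable names and data here are invented.

```python
# Hedged sketch (one common DIF method, not necessarily the paper's):
# logistic-regression DIF on a dichotomized item, using statsmodels.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
df = pd.DataFrame({
    "total": rng.normal(size=n),           # rest-score proxy for the trait
    "group": rng.integers(0, 2, size=n),   # e.g., adolescent vs. adult
})
# Simulate an item response driven only by the trait (i.e., no DIF).
p = 1 / (1 + np.exp(-df["total"]))
df["resp"] = (rng.random(n) < p).astype(int)

# Uniform DIF shows up as a significant group effect after conditioning
# on the trait; non-uniform DIF as a significant interaction term.
fit = smf.logit("resp ~ total + group + total:group", data=df).fit(disp=0)
print(fit.summary().tables[1])
```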
  5. This evidence-based practices paper discusses the method employed in validating the use of a project-modified version of the PROCESS tool (Grigg, Van Dyken, Benson, & Morkos, 2013) for measuring student problem-solving skills. The PROCESS tool allows raters to score students' ability in the domains of Problem definition, Representing the problem, Organizing information, Calculations, Evaluating the solution, Solution communication, and Self-assessment. Specifically, this research compares student performance on solving traditional textbook problems with novel, student-generated learning activities (i.e., reverse engineering videos in order to then create their own homework problem and solution). The use of student-generated learning activities to assess student problem-solving skills has theoretical underpinning in Felder's (1987) work on "creating creative engineers," as well as the need to develop students' abilities to transfer learning and solve problems in a variety of real-world settings.

    In this study, four raters used the PROCESS tool to score the performance of 70 students randomly selected from two undergraduate chemical engineering cohorts at two Midwest universities. Students from both cohorts solved 12 traditional textbook-style problems, and students from the second cohort solved an additional nine student-generated video problems.

    Any large-scale assessment where multiple raters use a rating tool requires the investigation of several aspects of validity. The many-facets Rasch measurement model (MFRM; Linacre, 1989) has the psychometric properties to determine whether any characteristics other than "student problem solving skills" influence the scores assigned, such as rater bias, problem difficulty, or student demographics. Before implementing the full rating plan, MFRM was used to examine how raters interacted with the six items on the modified PROCESS tool to score a random selection of 20 students' performance in solving one problem. An external evaluator led "inter-rater reliability" meetings where raters deliberated the rationale for their ratings, and differences were resolved by recourse to Pretz et al.'s (2003) problem-solving cycle, which informed the development of the PROCESS tool. To test the new understandings of the PROCESS tool, raters were assigned to score one new problem from a different randomly selected group of six students. Those results were then analyzed in the same manner as before. This iterative process resulted in substantial increases in reliability, which can be attributed to increased confidence that raters were operating with common definitions of the items on the PROCESS tool and rating with consistent and comparable severity.

    This presentation will include examples of the student-generated problems and a discussion of common discrepancies and solutions to the raters' initial use of the PROCESS tool. Findings, as well as the adapted PROCESS tool used in this study, can be useful to engineering educators and engineering education researchers.
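    As a lightweight companion to the MFRM analysis described above, here is a hypothetical Python sketch of a simpler inter-rater agreement check (quadratic-weighted kappa via scikit-learn). It is not the study's method, and the ratings below are invented toy data.

```python
# Hypothetical sketch: a quick inter-rater agreement check with
# weighted kappa (simpler than the many-facets Rasch model the
# study actually used). Ratings are invented toy data.
from sklearn.metrics import cohen_kappa_score

# Two raters' scores for the same 10 students on one rubric item
# (e.g., a 0-4 scale).
rater_a = [4, 3, 2, 4, 1, 0, 3, 2, 4, 1]
rater_b = [4, 3, 3, 4, 1, 1, 3, 2, 3, 1]

# Quadratic weighting penalizes large disagreements more heavily,
# which suits ordinal rubric scores.
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(f"weighted kappa = {kappa:.2f}")
```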