More Like This
-
Abstract
Background and objectives: Universities throughout the USA increasingly offer undergraduate courses in evolutionary medicine (EvMed), which creates a need for pedagogical resources. Several resources offer course content (e.g. textbooks), and a previous study identified EvMed core principles to help instructors set learning goals. However, assessment tools are not yet available. In this study, we address this need by developing an assessment that measures students’ ability to apply EvMed core principles to various health-related scenarios.
Methodology: The EvMed Assessment (EMA) consists of questions containing a short description of a health-related scenario followed by several likely/unlikely items. We evaluated the assessment’s validity and reliability using a variety of qualitative (expert reviews and student interviews) and quantitative (Cronbach’s α and classical test theory) methods. We iteratively revised the assessment through several rounds of validation. We then administered the assessment to undergraduates in EvMed and Evolution courses at multiple institutions.
Results: We used results from the pilot to create the EMA final draft. After conducting quantitative validation, we deleted items that failed to meet performance criteria and revised items that exhibited borderline performance. The final version of the EMA consists of six core questions containing 25 items and five supplemental questions containing 20 items.
Conclusions and implications: The EMA is a pedagogical tool supported by a wide range of validation evidence. Instructors can use it as a pre/post measure of student learning in an EvMed course to inform curriculum revision, or as a test bank to draw upon when developing in-class assessments, quizzes or exams.
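For readers unfamiliar with the reliability statistic mentioned in the methodology above, the following is a minimal sketch, not taken from the paper, of how Cronbach’s α is conventionally computed from an item-response matrix; the sample responses and variable names are illustrative assumptions.

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]                         # number of items
    item_vars = item_scores.var(axis=0, ddof=1)      # variance of each item
    total_var = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical data: five students answering four likely/unlikely items scored 0/1
responses = np.array([
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
])
print(round(cronbach_alpha(responses), 2))
```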
-
This full research paper documents assessment definitions from engineering faculty members, mainly from Research 1 universities. Assessments are essential components of the engineering learning environment, and how engineering faculty make decisions about assessments in their classrooms is a relatively understudied topic in engineering education research. Exploring how engineering faculty think about and implement assessments through the mental model framework can help address this research gap. The research documented in this paper analyzes data from an informational questionnaire, part of a larger study, to understand how participants define assessments using methods inspired by mixed-methods strategies. These strategies include descriptive statistics on demographic data, along with Natural Language Processing (NLP) and qualitative coding of the open-ended question asking participants to define assessments, which yielded cluster themes that characterize the definitions. Findings show that while many participants defined assessments in relation to measuring student learning, other substantial aspects include benchmarking, assessing student ability and competence, and formal evaluation for quality. These findings serve as foundational knowledge for deeper exploration of engineering faculty members’ assessment mental models, which can begin to address the aforementioned research gap on faculty assessment decisions in classrooms.
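To illustrate the kind of NLP-assisted clustering of open-ended definitions described above (this is a generic sketch, not the authors’ actual pipeline), one common approach vectorizes responses with TF-IDF and groups them with k-means; the example definitions and cluster count are assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical open-ended responses to "How do you define assessment?"
definitions = [
    "A way to measure what students have learned",
    "Benchmarking student progress against course outcomes",
    "Evaluating the quality of student work in a formal way",
    "Measuring student ability and competence on key skills",
]

# Represent each definition as a TF-IDF vector, then group similar definitions
vectors = TfidfVectorizer(stop_words="english").fit_transform(definitions)
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(vectors)

# Cluster labels would then be interpreted and named by human coders
for label, text in zip(kmeans.labels_, definitions):
    print(label, text)
```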