-
Computer-based testing is a powerful tool for scaling exams in large lecture classes. The decision to adopt computer-based testing is typically framed as a tradeoff in time: some of the time saved by auto-grading is reallocated to developing problem pools, but the net savings remains significant. This paper examines the tradeoff in terms of accuracy in measuring student understanding. While some exam formats (e.g., multiple choice) are readily portable to a computer-based format, adequately porting other exam types (e.g., drawings such as FBDs, or worked problems) can be challenging. A key component of this challenge is to ask, "What is the exam actually able to measure?" In this paper, the authors provide a quantitative and qualitative analysis of student understanding measurements via computer-based testing in a sophomore-level Solid Mechanics course. At Michigan State University, Solid Mechanics is taught using the SMART methodology, where SMART stands for Supported Mastery Assessment through Repeated Testing. In a typical semester, students are given 5 exams that test their understanding of the material. Each exam is graded using the SMART rubric, which awards full points for a correct answer, partial credit for non-conceptual errors, and zero points for a solution with a conceptual error. Every exam is divided into four sections: concept, simple, average, and challenge. Each exam has at least one retake opportunity, for a total of 10 written tests. In the current study, students representing 10% of the class took half of each exam in PrairieLearn, a computer-based auto-grading platform. During these exams, students were given instant feedback on submitted answers (correct or incorrect) and an opportunity to identify their mistakes and resubmit their work. Students were provided with scratch paper to set up the problems and work out solutions. After each exam, the paper-based work was compared with the computer-submitted answers. This paper examines which types of mistakes (conceptual and non-conceptual) students were able to correct when feedback was provided; the answer depends on the type and difficulty of the problem. The analysis also examines whether students taking the computer-based test performed at the same level as their peers who took the paper-based exams. Additionally, student feedback is provided and discussed.
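As a rough illustration of the SMART rubric described above, the sketch below assigns full credit for a correct answer, partial credit for a non-conceptual error, and zero for a conceptual error. The function name, error categories, and the 50% partial-credit fraction are assumptions made for illustration, not the course's actual parameters.

```python
# Illustrative sketch of a SMART-style rubric (assumed parameters, not the course's actual values).
from enum import Enum

class ErrorType(Enum):
    NONE = "none"                      # correct answer
    NON_CONCEPTUAL = "non_conceptual"  # e.g., arithmetic slip or sign error
    CONCEPTUAL = "conceptual"          # e.g., wrong FBD or wrong governing equation

def smart_score(max_points: float, error: ErrorType,
                partial_fraction: float = 0.5) -> float:
    """Points awarded under a SMART-style rubric: full credit for a correct
    answer, a fixed fraction for a non-conceptual error, zero for a
    conceptual error. The 0.5 fraction is an assumption for illustration."""
    if error is ErrorType.NONE:
        return max_points
    if error is ErrorType.NON_CONCEPTUAL:
        return partial_fraction * max_points
    return 0.0  # conceptual error

# Example: a 10-point "average" problem with a sign error (non-conceptual)
print(smart_score(10, ErrorType.NON_CONCEPTUAL))  # 5.0
```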
-
In 2016, Michigan State University developed a new model of classroom education and assessment in its Mechanics of Materials course. This model uses a modified mastery approach that stresses formative assessment, guidance in the problem-solving process, and structured student reflection. We now refer to this approach as SMART Assessment, short for Supported Mastery Assessment through Repeated Testing. The effects of this model have been very positive, and results on overall student success in Mechanics of Materials have been presented in full at prior ASEE conferences. In this paper, we focus on the effects of this assessment model on the performance of students who may be at greater risk due to their first-generation status or economic disadvantage, while accounting for other measures such as incoming GPA and performance in the prerequisite course, Statics. The evaluation was conducted across 3.5 academic years and involved 1275 students divided among 9 experimental sections and 6 control sections. Statistical analysis indicated no significant differences between the performance indices of students in the SMART sections based on their parents' history of university education or their eligibility for a Pell Grant. While students in the Traditional sections tended to have higher grades in ME222, these grades cannot be compared directly to those in the SMART sections due to the difference in grading frameworks. Previous work, however, has indicated that students who complete the SMART-framed sections have a deeper understanding of the course material, as demonstrated by their improved performance on common final exam problems that were evaluated with a mastery-focused rubric.
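The abstract does not name the statistical test used; as a hedged sketch of how such a group comparison might be run, the snippet below applies Welch's two-sample t-test to performance indices split by Pell eligibility and first-generation status. The DataFrame layout, column names, and file name are hypothetical.

```python
# Hypothetical sketch of the group comparison; the study's actual test and data layout are not given in the abstract.
import pandas as pd
from scipy import stats

# Hypothetical table with one row per student:
#   performance_index : float (course performance measure)
#   pell_eligible     : bool
#   first_generation  : bool
df = pd.read_csv("smart_sections.csv")  # placeholder file name

for factor in ["pell_eligible", "first_generation"]:
    group_a = df.loc[df[factor], "performance_index"]
    group_b = df.loc[~df[factor], "performance_index"]
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test
    print(f"{factor}: t = {t_stat:.2f}, p = {p_value:.3f}")
```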
-
In the spring of 2020, universities across America and around the world abruptly transitioned to online learning. The online transition required faculty to find novel ways to administer assessments and, in some cases, led students to find novel ways of cheating in their classes. The purpose of this paper is to provide a retrospective on cheating during online exams in the spring of 2020. It specifically looks at honor code violations in a sophomore-level engineering course that enrolled more than 200 students. In this particular course, four pre-COVID assessments were given in class and six mid-COVID assessments were given online. This paper examines the increasing rate of cheating on these assessments and the profiles of the students who engaged in cheating. It compares students who violated the honor code by uploading exam questions with those who looked at solutions to uploaded questions. This paper also looks at the abuse of Chegg during exams and the responsiveness of Chegg's honor code team, and it discusses the effectiveness of Chegg's user account data in pursuing academic integrity cases. Information is also provided on the response times of Chegg tutors in answering exam questions and the actual efficacy of cheating in this fashion.
-
Students achieve functional knowledge retention through active, spaced repetition of concepts via homework, quizzes, and lectures. True knowledge retention is best achieved through proper comprehension of the concept. In the engineering curriculum, courses are sequenced into prerequisite chains of three to five courses per subfield, a design aimed at developing and reinforcing core concepts over time. Retention of these prerequisite concepts is important for the next course. In this project, concept review quizzes were used to identify the gaps and deficiencies in students' prerequisite knowledge and to measure improvement after a concept review intervention. Two quizzes (pre-intervention and post-intervention) drew inspiration from standard concept inventories for fundamental concepts and covered topics such as free body diagrams, contact and reaction forces, equilibrium equations, and calculation of the moment. Concept inventories are typically multiple-choice; in this evaluation, the concept questions were open-ended, and a clear rubric was created to identify the missing prerequisite concepts in the students' knowledge. These quizzes were deployed in Mechanics of Materials, a second-level course in the engineering mechanics curriculum (the second in a sequence of four courses: Statics, Mechanics of Materials, Mechanical Design, and Kinematic Design). The pre-quiz was administered unannounced at the beginning of the class. The class then actively participated in a 30-minute concept review, and a different post-quiz was administered in the same class period after the review. Quizzes were graded with a rubric to measure the effect of the concept review intervention on the students' knowledge demonstration and calculations. The study evaluated four major concepts: free body diagrams, boundary reaction forces (fixed, pin, and contact), equilibrium, and moment calculation. Students showed improvements of up to 39% in the case of drawing a free body diagram with a fixed boundary condition, but they continued to struggle with free body diagrams involving contact forces. This study was performed at a large public institution with a class size of 240 students; a total of 224 students consented to the use of their data for this study and attended class on the day of the intervention. The pre-quiz is used to determine the gaps (or deficiencies) in students' conceptual understanding, while the post-quiz measures the response to the review and is used to determine which concept deficiencies were significantly improved by the review and which were not. This study presents a concept quiz and associated rubric for measuring student improvement resulting from an in-class intervention (concept review), and it quantifies a significant improvement in students' retrieval of prerequisite knowledge after a concept review session. This approach, therefore, has utility for improving knowledge retention in programs with a similar, sequenced course design.
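As a hedged sketch of how the pre/post comparison could be tabulated, the snippet below averages per-concept rubric scores before and after the review and reports the change in percentage points. The file name, column names, and 0-to-1 scoring scale are assumptions, not the study's actual rubric.

```python
# Hypothetical sketch of the pre/post tabulation; column names and scoring scale are assumed for illustration.
import pandas as pd

# Each row is one student; each concept has a pre- and post-quiz rubric score in [0, 1].
scores = pd.read_csv("concept_quiz_scores.csv")  # placeholder file name

concepts = ["fbd_fixed", "fbd_contact", "equilibrium", "moment"]
for concept in concepts:
    pre = scores[f"pre_{concept}"].mean()
    post = scores[f"post_{concept}"].mean()
    print(f"{concept}: pre = {pre:.2f}, post = {post:.2f}, "
          f"improvement = {100 * (post - pre):.1f} percentage points")
```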