Title: The Success and Retention of Students Using Multiple-Attempt Testing in Fundamental Engineering Courses: Dynamics and Thermodynamics
First name Last name1, First name Last name1, First name Last name2, First name Last name2, First name Last name3, and First name Last name1
1Department One, 2Department Two, 3Department Three, University

Abstract

The notion behind Multiple-Attempt Testing continues to be investigated for its benefits to students' overall success and their retention in fundamental engineering courses. Two engineering courses, Dynamics and Thermodynamics, were delivered in mixed mode in Spring 2023 (post-COVID), and their results were compared to the same courses delivered in mixed mode in the same semester four years earlier, Spring 2019 (pre-COVID). All four offerings were large classes, ranging from 167 students in Dynamics to 267 students in Thermodynamics in Spring 2023. Each course had three tests during the semester. In Spring 2019, students were given a five-day window to take each test in the testing center (TC); facilitated by the Learning Management System (LMS), grades were instantly uploaded into Canvas. Once a test closed, students were allowed to review their work with a teaching assistant to learn from their mistakes and claim partial credit where possible. In Spring 2023, by contrast, students in both courses took three tests during the semester with three attempts each, plus a make-up cumulative final examination, also with three attempts, for those who wanted to improve their grades. No partial credit was given on any attempt of any test or of the final examination. Each attempt was open for two days, and students were allowed to see their tests after each attempt, learn from their mistakes, and prepare better for the next attempt.
The effectiveness of this testing-interwoven-learning method lies in the fact that students are comfortable and less anxious when taking their tests: knowing they have additional chances, they can learn from their mistakes, focus their attention on their weaknesses, strengthen their knowledge, and do better on the next attempt. With this self-paced method, students also learn a great deal on their own, given the number of instructional videos provided to them. The study shows a substantial decrease in the failure rate (65%), and the overall DWF rate decreased by more than 40% in both courses. This suggests that students aspired to do well on every attempt and that, even if they failed all three tests, they still had a final examination that could save them, which reduced the overall DWF rate. A survey was also conducted, revealing that more than 70% of students would prefer this method of testing and learning in future courses.
Award ID(s):
2225208
PAR ID:
10531366
Publisher / Repository:
ASEE Conferences
Date Published:
Subject(s) / Keyword(s):
Multiple-Attempt Testing, Large Dynamics Classes, Students' Success and Retention, Canvas
Format(s):
Medium: X
Location:
Portland, Oregon
Sponsoring Org:
National Science Foundation
More Like this
  1.
    In mechanics, the standard 3-credit, 45-hour course is sufficient to deliver standard lectures with prepared examples and questions. Moreover, it is not only feasible but preferable to employ any of a variety of active learning and teaching techniques. Nevertheless, even when active learning is strategically used, students and instructors alike experience pressure to accomplish their respective learning and teaching goals under the constraints of the academic calendar, raising questions as to whether the allocated time is sufficient to enable authentic learning. One way to assess learning progress is to examine the learning cycles through which students attempt, re-think, and re-attempt their work. This article provides data to benchmark the time required to learn key Statics concepts, based on the results of instructing approximately 50 students in a Statics class at a public research university during the Fall 2020 semester. Two parallel techniques are employed to foster and understand student learning cycles.

    • Through a Mastery-Based Learning model, 15 weekly pass/fail "Mastery Tests" are given. Students who do not pass may re-test with a different but similar test on the same topic each week until the semester's conclusion. The tests are highly structured in that they are well posed and highly focused. For example, some tests focus only on drawing Free Body Diagrams, with no equations or calculations; other tests focus on writing equilibrium equations from a given Free Body Diagram. Passing the first six tests is required to earn the grade of D; passing the next three earns a C, the next three a B, and the final three an A. Evaluations include coding of student responses to infer student reasoning. Learning cycles occur as students repeat the same topics, and their progress is assessed by passing rates and by comparing evolving responses to the same test topics.

    • Concept Questions that elicit qualitative responses and written explanations are deployed at least weekly. The learning cycle here consists of students answering a question, seeing the overall class results (but without the correct answer), having a chance to explore the question with other students and the instructor, and finally getting an opportunity to re-answer the same question, perhaps a few minutes or up to a couple of days later. Sometimes that same question is given a third time to encourage further effort or progress.

    To date, results from both cycles appear to agree on one important conclusion: the rate of demonstrated learning is quite low. For example, each Mastery Test has a passing rate of 20%-30%, including for students with several repeats. With the Concept Questions, typically no more than half of the students who answered incorrectly change to the correct answer by the time of the final poll. The final article will provide quantitative and qualitative results from each type of cycle, including tracking coded responses on Mastery Tests, written responses on Concept Questions, and cross-comparisons thereof. Additional results will be presented from student surveys. Since the Mastery Tests and Concept Questions follow typical Statics topics, this work has the potential to lead to a standardized set of benchmarks and standards for measuring student learning – and its rate – in Statics.
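    The mastery-test grade ladder described above (passing 6 tests earns a D, 9 a C, 12 a B, and all 15 an A) maps naturally to a small threshold function. This is a minimal sketch under the assumption that passes accumulate in the prescribed sequence; the function name and the strictly cumulative reading are illustrative, not the authors' implementation.

```python
# Minimal sketch of the 15-test mastery grade ladder: 6 passes for a D,
# 9 for a C, 12 for a B, 15 for an A. Assumes tests are passed in sequence.

def mastery_grade(tests_passed: int) -> str:
    """Map the number of mastery tests passed (in order) to a letter grade."""
    thresholds = [(15, "A"), (12, "B"), (9, "C"), (6, "D")]
    for needed, grade in thresholds:
        if tests_passed >= needed:
            return grade
    return "F"

print(mastery_grade(10))  # prints C: past the 9-test threshold, short of 12
```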
  2. The COVID-19 pandemic is disrupting engineering education globally, at all levels of education. While distance education is nothing new, the pandemic forced instructors to rapidly move their courses online whether or not they had ever received prior training in online education. In particular, there is very little literature to guide instructors in supporting students in online engineering design or project-based courses. The purpose of this research is to examine engineering students' reports of social support in their project- and design-based courses at a large research university during the move to online instruction due to COVID-19 in the Spring 2020 semester, and to provide recommendations for instructors teaching these types of courses online in the future. Our study is framed by social constructivism and social capital theory. We surveyed undergraduate engineering and engineering technology students (n = 235) across undergraduate levels during the final week of the Spring 2020 semester. Survey questions included open-ended prompts about social supports and the overall experience of the transition to online learning, as well as name- and resource-generator questions focused on specific people and types of interactions that changed during the pandemic. We used qualitative content analysis of the open-ended responses, along with comparisons of the name and resource generators, to develop recommendations for instructors. Recommendations to increase students' social supports include: facilitating informal conversations between students and between students and the instructional team, grouping students located in the same time zones into teams, facilitating co-working sessions for students, establishing weekly structure, and utilizing some synchronous components (e.g., virtual office hours).
  3. Cost-effective, secure, and portable electronic instrumentation equipment is used in Experiment Centric Pedagogy (ECP), formerly known as Mobile Hands-On Studio Technology and Pedagogy, as a teaching method for STEM subjects both inside and outside the classroom. Since Spring 2020, ECP has been integrated into two Industrial Engineering (IE) courses, Thermodynamics and Materials Engineering, in various ways, including student use at home, in-class demonstrations, and teaching labs. During the most recent academic session (Fall 2021–Spring 2022), the effects of practical home-based experimentation and lab activities on students' attitudes, interests, and performance were examined for the Engineering Thermodynamics course. The results of the Motivated Strategies for Learning Questionnaire (MSLQ), given to 51 students, demonstrated improvements in students' motivation, epistemic curiosity, and perceptual curiosity, three crucial characteristics linked to their success. Along with the MSLQ, the Classroom Observation Protocol for Undergraduate STEM (COPUS) was used to assess active learning in Industrial Engineering courses, and quantitative and qualitative data on the significant components of student achievement were gathered. Results show that using ECP has improved students' awareness of material properties and increased their interest in learning about the thermodynamic concept of heat transfer in connection with various solid materials.
  4. Computer-based testing is a powerful tool for scaling exams in large lecture classes. The decision to adopt computer-based testing is typically framed as a tradeoff in terms of time: time saved by auto-grading is reallocated as time spent developing problem pools, but with significant net savings. This paper seeks to examine the tradeoff in terms of accuracy in measuring student understanding. While some exams (e.g., multiple choice) are readily portable to a computer-based format, adequately porting other exam types (e.g., drawings like FBDs or worked problems) can be challenging. A key component of this challenge is to ask, "What is the exam actually able to measure?" In this paper the authors provide a quantitative and qualitative analysis of measurements of student understanding via computer-based testing in a sophomore-level Solid Mechanics course.

    At Michigan State University, Solid Mechanics is taught using the SMART methodology, where SMART stands for Supported Mastery Assessment through Repeated Testing. In a typical semester, students are given 5 exams that test their understanding of the material. Each exam is graded using the SMART rubric, which awards full points for a correct answer, some percentage for non-conceptual errors, and zero points for a solution containing a conceptual error. Every exam is divided into four sections: concept, simple, average, and challenge. Each exam has at least one retake opportunity, for a total of 10 written tests. In the current study, students representing 10% of the class took half of each exam in PrairieLearn, a computer-based auto-grading platform. During this exam, students were given instant feedback on submitted answers (correct or incorrect) and an opportunity to identify their mistakes and resubmit their work. Students were provided with scratch paper to set up the problems and work out solutions. After the exam, the paper-based work was compared with the computer-submitted answers.

    This paper examines what types of mistakes (conceptual and non-conceptual) students were able to correct when feedback was provided; the answer depends on the type and difficulty of the problem. The analysis also examines whether students taking the computer-based test performed at the same level as their peers who took the paper-based exams. Additionally, student feedback is provided and discussed.
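    The SMART rubric described above (full points for a correct answer, a partial fraction for a non-conceptual error, zero for a conceptual error) can be sketched as a small scoring function. This is a hedged illustration: the paper says only "some percentage" for non-conceptual errors, so the 50% fraction below is an assumption, as are the names.

```python
# Illustrative sketch of a SMART-style rubric. The 0.5 fraction for
# non-conceptual errors is an assumption; the paper says only "some percentage".
from enum import Enum

class Outcome(Enum):
    CORRECT = "correct"
    NONCONCEPTUAL_ERROR = "nonconceptual"   # e.g., an arithmetic slip
    CONCEPTUAL_ERROR = "conceptual"         # e.g., a wrong free-body diagram

def smart_score(outcome: Outcome, max_points: float,
                nonconceptual_fraction: float = 0.5) -> float:
    """Score one exam problem under the rubric."""
    if outcome is Outcome.CORRECT:
        return max_points
    if outcome is Outcome.NONCONCEPTUAL_ERROR:
        return max_points * nonconceptual_fraction
    return 0.0  # conceptual errors earn nothing

print(smart_score(Outcome.NONCONCEPTUAL_ERROR, 10))  # prints 5.0
```

    The all-or-most-or-nothing structure is the point of the rubric: it separates conceptual understanding from execution slips, which is exactly the distinction the computer-based feedback study probes.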
  5. This full research paper explores how second-chance testing can be used as a strategy for mitigating students’ test anxiety in STEM courses, thereby boosting students’ performance and experiences. Second-chance testing is a testing strategy where students are given an opportunity to take an assessment twice. We conducted a mixed-methods study to explore second-chance testing as a potential solution to test anxiety. First, we interviewed a diverse group of STEM students (N = 23) who had taken courses with second-chance testing to ask about the stress and anxiety associated with testing. We then administered a survey on test anxiety to STEM students in seven courses that offered second-chance tests at Midwestern University (N = 448). We found that second-chance testing led to a 30% reduction in students’ reported test anxiety. Students also reported reduced stress throughout the semester, even outside of testing windows, due to the availability of second-chance testing. Our study included an assortment of STEM courses where second-chance testing was deployed, which indicates that second-chance testing is a viable strategy for reducing anxiety in a variety of contexts. We also explored whether the resultant reduction in test anxiety led to student complacency, encouraged procrastination, or other suboptimal student behavior because of the extra chance provided. We found that the majority of students reported that they worked hard on their initial test attempts even when second-chance testing was available. 