Title: Learning to Cheat: Quantifying Changes in Score Advantage of Unproctored Assessments Over Time
Proctoring educational assessments (e.g., quizzes and exams) has a cost, be it in faculty (and/or course staff) time or in money to pay for proctoring services. Previous estimates of the utility of proctoring (generally obtained by estimating the score advantage of taking an exam without proctoring) vary widely, have mostly used across-subjects experimental designs, and sometimes suffer from low statistical power. We investigated the score advantage of unproctored exams over proctored exams using a within-subjects design for N = 510 students in an on-campus introductory programming course with 5 proctored exams and 4 unproctored exams. We found that students scored 3.32 percentage points higher on questions on unproctored exams than on proctored exams (p < 0.001). More interestingly, however, we discovered that this score advantage on unproctored exams grew steadily as the semester progressed, from around 0 percentage points at the start of the semester to around 7 percentage points by the end. As the most obvious explanation for this advantage is cheating, we refer to this behavior as the student population "learning to cheat". The data suggest both that more individuals are cheating and that the average benefit of cheating is increasing over the course of the semester. Furthermore, we observed that studying for unproctored exams decreased over the course of the semester while studying for proctored exams stayed constant. Lastly, we estimated the score advantage by question type and found that our long-form programming questions had the highest score advantage on unproctored exams, though there are multiple possible explanations for this finding.
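
A minimal sketch of the within-subjects estimate described above (illustrative only, not the authors' code; the file name and column names are hypothetical): a per-student paired difference gives the overall advantage, and a regression with student fixed effects plus a proctoring-by-time interaction captures the growth over the semester.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format table: one row per student x exam, with
    # columns student, exam_index (1..9), unproctored (1/0), and score
    # in percentage points.
    df = pd.read_csv("exam_scores.csv")

    # Per-student paired difference: mean unproctored score minus mean
    # proctored score, averaged over students.
    wide = df.pivot_table(index="student", columns="unproctored", values="score")
    print("mean score advantage (pp):", (wide[1] - wide[0]).mean())

    # Trend over the semester: student dummies absorb individual ability,
    # and the interaction term estimates how the advantage changes per exam.
    fit = smf.ols("score ~ C(student) + unproctored * exam_index", data=df).fit()
    print(fit.params[["unproctored", "unproctored:exam_index"]])

A mixed model with random student intercepts would be a natural alternative to the fixed-effect dummies; OLS is used here only for brevity.
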
Award ID(s):
1915257
PAR ID:
10200289
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Seventh ACM Conference on Learning @ Scale
Page Range / eLocation ID:
197 to 206
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Battestilli, Lina; Rebelsky, Samuel A; Shoop, Libby (Ed.)
    We compare the exam security of three proctoring regimens for Bring-Your-Own-Device, synchronous, computer-based exams in a computer science class: online unproctored, online proctored via Zoom, and in-person proctored. We performed two randomized crossover experiments to compare these regimens. The first study measured the score advantage students receive on unproctored online exams relative to Zoom-proctored online exams. The second study measured the score advantage of Zoom-proctored online exams relative to in-person proctored exams. In both studies, students took six 50-minute exams on their own devices, each comprising two coding questions and 8–10 non-coding questions. We find that students score 2.3% higher on non-coding questions when taking exams in the unproctored format compared to Zoom proctoring. No statistically significant advantage was found for the coding questions. While most of the non-coding questions were randomized so that students received different versions, for the few questions where all students received the exact same version, the score advantage escalated to 5.2%. From the second study, we find no statistically significant difference between students' performance on Zoom-proctored and in-person proctored exams. We therefore recommend that educators incorporate some form of proctoring along with question randomization to mitigate cheating concerns in BYOD exams.
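    A minimal sketch of the paired analysis that such a crossover design enables (hypothetical file and column names, not the paper's code): because each student is observed under both regimens, a paired t-test controls for per-student ability.

        import pandas as pd
        from scipy import stats

        # Hypothetical layout: one row per student x regimen, holding that
        # student's mean non-coding score under the regimen.
        df = pd.read_csv("crossover_scores.csv")
        wide = df.pivot_table(index="student", columns="regimen",
                              values="noncoding_score").dropna()

        # Paired comparison: each student serves as their own control.
        t_stat, p_val = stats.ttest_rel(wide["unproctored"], wide["zoom"])
        diff = (wide["unproctored"] - wide["zoom"]).mean()
        print(f"mean difference = {diff:.2f} points, p = {p_val:.4f}")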
  2. In response to the COVID-19 pandemic, educational institutions quickly transitioned to remote learning. The problem of how to perform student assessment in an online environment has become increasingly relevant, leading many institutions and educators to turn to online proctoring services to administer remote exams. These services employ various student-monitoring methods to curb cheating, including restricted ("lockdown") browser modes, video/screen monitoring, local network traffic analysis, and eye tracking. In this paper, we explore the security and privacy perceptions of the student test-takers being proctored. We analyze user reviews of proctoring services' browser extensions and subsequently perform an online survey (n = 102). Our findings indicate that participants are concerned about both the amount and the personal nature of the information shared with the exam proctoring companies. However, many participants also recognize a trade-off between pandemic safety concerns and the arguably invasive means by which proctoring services ensure exam integrity. Our findings also suggest that institutional power dynamics and students' trust in their institutions may dissuade students from opposing remote proctoring.
  3.
    To defend against collaborative cheating on code-writing questions, instructors of courses with online, asynchronous exams can use question variants: manually written questions, selected at random at exam time, that assess the same learning goal. To create these variants, instructors currently have to rely on intuition to balance two competing goals: variants must be different enough to defend against collaborative cheating, yet similar enough that students are assessed fairly. In this paper, we propose a data-driven investigation of these variants. We apply this investigation to a dataset of three midterm exams from a large introductory programming course. Our results show that (1) observable inequalities in student performance exist between variants and (2) these differences are not limited to score alone. Our results also show that the information gathered from our data-driven investigation can be used to provide recommendations for improving the design of future variants.
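    One simple form such a data-driven check could take (a sketch under assumed column names, not the paper's pipeline): compare the observed difficulty of the randomly assigned variants directly.

        import pandas as pd
        from scipy import stats

        # Hypothetical layout: one row per student attempt, with the randomly
        # assigned variant label and the score earned on it.
        df = pd.read_csv("variant_scores.csv")
        print(df.groupby("variant")["score"].agg(["mean", "std", "count"]))

        # One-way ANOVA across variants: a significant result flags variants
        # whose observed difficulty differs enough to deserve redesign.
        groups = [g["score"].to_numpy() for _, g in df.groupby("variant")]
        f_stat, p_val = stats.f_oneway(*groups)
        print(f"F = {f_stat:.2f}, p = {p_val:.4f}")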
    We conducted an across-semester quasi-experimental study that compared students' outcomes under frequent and infrequent testing regimens in an introductory computer science course. Students in the frequent testing semester (4 quizzes and 4 exams) outperformed those in the infrequent testing semester (1 midterm and 1 final exam) by 9.1 to 13.5 percentage points on code-writing questions. We complement these performance results with additional data from surveys, interviews, and analysis of textbook usage. In the surveys, students report a preference for the smaller number of exams, but rated the exams in the frequent testing semester as both less difficult and less stressful, in spite of the exams containing identical content. In the interviews, students predominantly indicated (1) that the frequent testing regimen encourages better study habits (e.g., more attention to work, less cramming) and leads to better learning, (2) that frequent testing reduces test anxiety, and (3) that the frequent testing regimen was more fair, though these opinions were not universally held. The students' impression that the frequent testing regimen would lead to better study habits is borne out in our analysis of students' activities in the course's interactive textbook: in the frequent testing semester, students spent more time on textbook readings and appeared to answer textbook questions more earnestly (i.e., less "gaming the system" by using hints and brute force).
  5. In the United States, the onset of COVID-19 triggered a nationwide lockdown, which forced many universities to move their primary assessments from invigilated in-person exams to unproctored online exams. This abrupt change occurred midway through the Spring 2020 semester, providing an unprecedented opportunity to investigate whether online exams can provide meaningful assessments of learning relative to in-person exams on a per-student basis. Here, we present data from nearly 2,000 students across 18 courses at a large Midwestern university. Using a meta-analytic approach in which we treated each course as a separate study, we showed that online exam scores closely resembled in-person exam scores at the individual level despite the online exams being unproctored, as demonstrated by a robust correlation between students' online and in-person exam scores. Moreover, our data showed that cheating was either not widespread or not effective at boosting scores, and the strong assessment value of online exams was observed regardless of the type of questions asked on the exam, course level, academic discipline, or class size. We conclude that online exams, even when unproctored, are a viable assessment tool.
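    The meta-analytic idea can be sketched as follows (hypothetical data layout, not the study's code): compute each course's correlation between online and in-person scores, then pool the per-course correlations via the Fisher z transform.

        import numpy as np
        import pandas as pd

        # Hypothetical layout: one row per student, with that student's
        # in-person exam average, online exam average, and course.
        df = pd.read_csv("course_scores.csv")

        z_values, weights = [], []
        for course, g in df.groupby("course"):
            r = g["in_person"].corr(g["online"])  # per-course Pearson r
            z_values.append(np.arctanh(r))        # Fisher z transform
            weights.append(len(g) - 3)            # inverse of var(z) = 1/(n - 3)

        pooled_r = np.tanh(np.average(z_values, weights=weights))
        print(f"pooled online/in-person correlation: {pooled_r:.3f}")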