

Title: Refusing to Try: Characterizing Early Stopout on Student Assignments
A prominent issue faced by the education research community is that of student attrition. While large research efforts have been devoted to studying course-level attrition, widely referred to as dropout, less research has focused on the finer-grained, assignment-level attrition commonly observed in K-12 classrooms. This latter instantiation of attrition, referred to in this paper as “stopout,” is characterized by students failing to complete their assigned work, but the causes of such behavior are often not known. This becomes a large problem for educators and developers of learning platforms, as students who give up on assignments early miss opportunities to learn and practice the material, which may affect future performance on related topics; similarly, it is difficult for researchers to develop, and subsequently for computer-based systems to deploy, interventions aimed at promoting productive persistence once a student has ceased interacting with the software. This difficulty highlights the importance of understanding and identifying early signs of stopout behavior in order to aid students preemptively and promote productive persistence in their learning. While many cases of student stopout may be attributable to gaps in student knowledge and indicative of struggle, student attributes such as grit and persistence may be further affected by other factors. This work focuses on identifying different forms of stopout behavior in the context of middle school math by observing student behaviors at the sub-problem level. We find that students exhibit disproportionate stopout on the first problem of their assignments in comparison to stopout on subsequent problems, identifying a behavior that we call “refusal,” and use the emerging patterns of student activity to better understand the potential causes underlying stopout behavior early in an assignment.
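As a rough illustration of the kind of analysis described above (not taken from the paper itself), the sketch below computes where in an assignment each stopout occurs and compares first-problem stopout, the “refusal” pattern, against stopout on later problems. The column names and toy data are illustrative assumptions.

```python
import pandas as pd

# Hypothetical log: one row per student-assignment, recording how many problems
# the assignment contains and how many the student completed before stopping.
# Column names and values are illustrative, not the paper's dataset.
logs = pd.DataFrame({
    "student_id":     [1, 2, 3, 4, 5],
    "assignment_id":  [10, 10, 10, 11, 11],
    "problems_total": [5, 5, 5, 4, 4],
    "problems_done":  [0, 5, 2, 0, 4],
})

# Stopout position: the problem at which the student quit. Zero completions means
# quitting on problem 1, which corresponds to the "refusal" pattern in the abstract.
incomplete = logs[logs["problems_done"] < logs["problems_total"]].copy()
incomplete["stopout_position"] = incomplete["problems_done"] + 1

# Compare how often stopout happens on the first problem versus later problems.
refusal_rate = (incomplete["stopout_position"] == 1).mean()
later_rate = (incomplete["stopout_position"] > 1).mean()
print(f"share of stopouts that are refusals (problem 1): {refusal_rate:.2f}")
print(f"share of stopouts on later problems:             {later_rate:.2f}")
```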
Award ID(s):
1724889 1822830
NSF-PAR ID:
10095358
Author(s) / Creator(s):
; ; ;
Date Published:
Journal Name:
Proceedings of the 9th International Conference on Learning Analytics and Knowledge
Page Range / eLocation ID:
391 to 400
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The increased usage of computer-based learning platforms and online tools in classrooms presents new opportunities to not only study the underlying constructs involved in the learning process, but also to use this information to identify and aid struggling students. Many learning platforms, particularly those driving or supplementing instruction, are only able to provide aid to students who interact with the system. With this in mind, student persistence emerges as a prominent learning construct contributing to students' success when learning new material. Conversely, high persistence is not always productive for students; in some cases additional practice does not help the student move toward a state of mastery of the material. In this paper, we apply a transfer learning methodology using deep learning and traditional modeling techniques to study high and low representations of unproductive persistence. We focus on two prominent problems in the fields of educational data mining and learning analytics representing low persistence, characterized as student "stopout," and unproductive high persistence, operationalized through student "wheel spinning," in an effort to better understand the relationship between these measures of unproductive persistence (i.e., stopout and wheel spinning) and develop early detectors of these behaviors. We find that models developed to detect within- and across-assignment stopout and wheel spinning each learn sets of features that generalize to predict the other. We further observe how these models perform at each learning opportunity within student assignments to identify when interventions may be deployed to best aid students who are likely to exhibit unproductive persistence.
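As a hedged illustration of the transfer idea described above (not the paper's actual architecture, features, or data), the sketch below trains a simple detector on synthetic stopout labels and then checks how well its scores carry over to a related wheel-spinning label. All feature and label constructions are synthetic assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-student-assignment features (e.g., hint counts, attempt counts,
# response times) with two correlated binary labels: stopout and wheel spinning.
X = rng.normal(size=(1000, 8))
stopout = (X[:, 0] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
wheel_spin = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_tr, X_te, y_stop_tr, y_stop_te, y_ws_tr, y_ws_te = train_test_split(
    X, stopout, wheel_spin, test_size=0.3, random_state=0
)

# Train a detector on one construct (stopout)...
stopout_model = LogisticRegression().fit(X_tr, y_stop_tr)

# ...then check how well its scores transfer to the other construct (wheel spinning).
scores = stopout_model.predict_proba(X_te)[:, 1]
print("AUC on stopout (its own task):    ", round(roc_auc_score(y_stop_te, scores), 3))
print("AUC transferred to wheel spinning:", round(roc_auc_score(y_ws_te, scores), 3))
```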
  2.
    The pedagogical approach of the Zone of Proximal Development (ZPD) is based on the belief that effective learning occurs when students are challenged just beyond the level of what they can do on their own. An expert teacher looking over the shoulder of a student would give just the right amount of hints; too much hinting gives away the solution, which deprives the student of the productive struggle that is needed for learning new concepts. Alternatively, no hinting may leave the student frustrated to the point where they give up. A key challenge with online learning is how to provide the right level of hints as if an expert teacher were there. This paper describes the evolution of hints for spatial visualization training using a mobile app. Students draw orthographic and isometric sketches, which are automatically graded by the app. When a student draws an assignment incorrectly, they are provided with the option of a hint or peeking at the solution. This paper discusses the development of the app feedback and how it has impacted student behavior in using the app. In a first implementation, some students who excessively peeked at the solution without trying very hard on the assignments did not significantly improve their spatial visualization ability as measured by the standardized PSVT:R test. To address the overuse of peeking, gamification was added that rewarded students for trying on their own before looking at a hint or peeking at the solution. In this paper, we look at a classroom trial that used a version of the spatial visualization mobile app with gamification. In general, gamification increased the post-test PSVT:R scores. However, there was also a partial negative effect: we saw instances where the gamification led to student frustration and wasted time because students avoided using hints in order to maximize their gamification points. We realized that capturing the knowledge of an expert teacher in providing hints just when needed is difficult to implement in an algorithm. Specific examples are presented along with proposed improvements to the in-app hints. The final paper will include data comparing results from a class in January 2018 that used the original hints with a class in January 2019 that will use the newer hints.
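To make the incentive structure concrete, here is a minimal, hypothetical sketch of a scoring rule that rewards attempting a problem before requesting a hint or peeking at the solution; the event names and point values are assumptions, not the app's actual scoring.

```python
# A minimal sketch (not the app's real scoring) of a gamification rule that rewards
# attempting a sketch before asking for a hint or peeking at the solution.

def score_assignment(events: list[str]) -> int:
    """Score one assignment from an ordered event list: 'attempt', 'hint', 'peek', 'correct'."""
    points = 0
    attempted_before_help = False
    for event in events:
        if event == "attempt":
            attempted_before_help = True
        elif event == "hint":
            points += 2 if attempted_before_help else 0    # small reward only if the student tried first
        elif event == "peek":
            points -= 0 if attempted_before_help else 1    # penalize peeking before any attempt
        elif event == "correct":
            points += 10 if attempted_before_help else 5   # fuller credit for independent effort
    return points

print(score_assignment(["attempt", "hint", "attempt", "correct"]))  # tried first: 12 points
print(score_assignment(["peek", "attempt", "correct"]))             # peeked immediately: 9 points
```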
  3. Engineering students develop competencies in fundamental engineering courses (FECs) that are critical for success later in advanced courses and engineering practice. Literature on the student learning experience, however, associates these courses with challenging educational environments (e.g., large class sizes) and low student success rates. Challenging educational environments are particularly prevalent in large, research-intensive institutions. To address concerns associated with FECs, it is important to understand the prevailing educational environments in these courses and identify critical points where improvement and change are needed. The Academic Plan Model provides a systematic way to critically examine the factors that shape the educational environment. It includes paths for evaluation and adjustment, allowing educational environments to continuously improve. The Model may be applied to various levels in an institution (e.g., course, program, college), implying that a student's entire undergraduate learning experience is the result of several enacted academic plans that interact with each other. Thus, understanding context-specific factors in a specific educational environment will yield valuable information about the undergraduate experience, including concerns related to attrition and persistence. In order to better understand why students are not succeeding in large foundational engineering courses, we developed a form to collect data on why students withdraw from certain courses. The form was included as a requirement during the withdrawal process. In this paper, we analyzed course withdrawal data from several academic departments in charge of teaching large foundational engineering courses, together with institutional transcript data for the Spring 2018 semester. The withdrawal dataset includes the final grades that students expected to receive in the course and the factors that influenced their decision to withdraw. Institutional transcript data include demographic information (e.g., gender, major), admissions data (e.g., SAT scores, high school GPA), and institutional academic information (e.g., course grades, cumulative GPA). Results provide a better understanding of the main reasons students decide to withdraw from a course, including having unsatisfactory grades, not understanding the professor, and being overwhelmed with work. We also analyzed locus of control for the responses, finding that the majority of students withdrawing from courses consider the problem to be outside of their control and coming from an external source. We provide analyses by department and by specific course. Implications for administrators, practitioners, and researchers are provided.
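As a rough illustration of this kind of analysis (not the study's actual coding scheme or data), the sketch below tallies withdrawal reasons and applies a simple internal/external locus-of-control coding; the column names, reason labels, and the coding itself are illustrative assumptions.

```python
import pandas as pd

# Hypothetical withdrawal-form records; labels are placeholders, not study data.
withdrawals = pd.DataFrame({
    "course": ["STATICS", "PHYSICS I", "STATICS", "CHEM I", "PHYSICS I"],
    "reason": ["unsatisfactory grades", "did not understand professor",
               "overwhelmed with work", "unsatisfactory grades",
               "did not understand professor"],
})

# Illustrative internal/external locus-of-control coding per reason.
locus_map = {
    "unsatisfactory grades":        "internal",
    "overwhelmed with work":        "internal",
    "did not understand professor": "external",
}
withdrawals["locus"] = withdrawals["reason"].map(locus_map)

# Tally the most common reasons and the locus-of-control split by course.
print(withdrawals["reason"].value_counts())
print(pd.crosstab(withdrawals["course"], withdrawals["locus"]))
```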
  4. Miller, Eva (Ed.)
    The COVID-19 pandemic disrupted global educational systems, with institutions transitioning to e-learning. Undergraduate STEM students complained about lowered motivation to learn and complete STEM course requirements. To better prepare for more effective STEM education delivery during high-risk conditions such as pandemics, it is important to understand the learning motivation challenges (LMCs) experienced by students. As part of a larger national research study investigating decision-making in undergraduate STEM students during COVID-19, the purpose of this research is to examine LMCs experienced by undergraduate STEM students. One hundred and ninety students from six U.S. institutions participated in Qualtrics-based surveys. Using a five-point Likert scale, respondents rated the extent to which they agreed with LMC statements. Using Qualtrics data analysis tools and MS Excel, data from 130 usable surveys were analyzed using descriptive and inferential statistics. Results revealed that regardless of classification, GPA, age, or race, STEM students experienced LMCs. The top five LMCs were: (1) Assignment Overloads; (2) Lack of In-Person Peer Interactions; (3) Uncaring Professors; (4) Lack of In-Person Professor Interactions; and (5) Lack of In-Person Laboratory Experiences. Significant relationships existed between three characteristics (GPA, classification, and age) and a few LMCs, including assignment overloads. Students tended to attribute lowered motivation to Institutional and Domestic challenges, which were typically out of their control, rather than to Personal challenges, which were typically within their control. Crosstab analysis suggested that sophomores, Asian students, students with GPAs between 2.00 and 2.49, and those aged 41 to 50 years may be the most vulnerable due to higher dependence on traditional in-person STEM educational environments. Early identification of the most vulnerable students should be quickly followed by interventions. Increased attention to sophomores may reduce exacerbation of the potential sophomore slump and middle-child syndrome. All STEM students require critical domestic, institutional, and personal resources. Institutions should strengthen students' self-regulation skills and provide increased opportunities for remote peer interactions. Training of faculty and administrators is critical to build institutional capacity to motivate and educate STEM students with diverse characteristics in e-learning environments. Pass/fail policies should be carefully designed and implemented to minimize negative impacts on motivation. Employers should expand orientation and mentoring programs for entry-level employees, particularly for laboratory-based tasks. Research is needed to improve the delivery of STEM laboratory e-learning experiences. Findings inform future research, as well as best practices for improved institutional adaptability and resiliency. These will minimize disruptions to student functioning and performance, reduce attrition, and strengthen progression into the STEM workforce during high-risk conditions such as pandemics. With caution, findings may be extended to non-STEM and non-student populations.
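For illustration only, the sketch below shows the kind of crosstab plus chi-square test of independence such an analysis might use; the toy responses, category labels, and column names are assumptions, not the study's data.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical Likert-derived survey responses collapsed to agree/disagree.
responses = pd.DataFrame({
    "classification": ["Sophomore", "Senior", "Sophomore", "Junior", "Senior", "Sophomore"],
    "agrees_assignment_overload": ["agree", "disagree", "agree", "agree", "disagree", "agree"],
})

# Crosstab of student classification against agreement with one LMC statement,
# followed by a chi-square test of independence.
table = pd.crosstab(responses["classification"],
                    responses["agrees_assignment_overload"])
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```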
  5. This work falls under the evidence-based practice type of paper. Online undergraduate engineering education is rapidly increasing in use. The online format not only provides greater flexibility and ease of access for students, but also has lower costs for universities when compared to face-to-face courses. Even with these generally positive attributes, online courses face challenges with respect to student attrition. Numerous studies have shown that the dropout rate in online courses is higher than that for in-person courses, and topics related to online student persistence remain of interest. Data describing student interactions with their Learning Management System (LMS) provide an important lens through which online student engagement and corresponding persistence decisions can be studied, but many engineering education researchers may lack experience in working with LMS interaction data. The purpose of this paper is to provide a concrete example for other engineering education researchers of how LMS interaction data from online undergraduate engineering courses can be prepared for analysis. The work presented here is part of a larger National Science Foundation-funded study dedicated to developing a theoretical model for online undergraduate engineering student persistence based on student LMS interaction activities and patterns. Our sample dataset includes six courses, two from electrical engineering and four from engineering management, offered during the fall 2018 semester at a large, public southwestern university. The LMS interaction data provide details about students' navigation to and submission of different course elements, including quizzes, assignments, discussion forums, wiki pages, attachments, modules, the syllabus, the gradebook, and course announcements. Relatedly, the features created from the data in this study can be classified into three categories: 1) learning page views, which capture student interactions with course content; 2) procedural page views, which capture student navigation to course management activities; and 3) social page views, which capture learner-to-learner and learner-to-instructor interactions. The full paper will provide the rationale and details involved in choices related to data cleaning, manipulation, and feature creation. A complete list of features will also be included. These features will ultimately be combined with associative classification to discover relationships between student-LMS interactions and persistence decisions.
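As a hedged illustration of this feature-creation step (the element names and category assignments below are assumptions, not the study's actual mapping), raw page-view events can be grouped into the three categories and counted per student:

```python
import pandas as pd

# Illustrative mapping of LMS course elements to the three feature categories
# described in the abstract: learning, procedural, and social page views.
CATEGORY_MAP = {
    # learning page views: interactions with course content
    "quiz": "learning", "assignment": "learning", "wiki_page": "learning",
    "attachment": "learning", "module": "learning",
    # procedural page views: course-management navigation
    "syllabus": "procedural", "gradebook": "procedural", "announcement": "procedural",
    # social page views: learner-to-learner / learner-to-instructor interaction
    "discussion_forum": "social",
}

# Hypothetical event log: one row per page view.
events = pd.DataFrame({
    "student_id": [1, 1, 2, 2, 2],
    "element":    ["quiz", "gradebook", "discussion_forum", "assignment", "syllabus"],
})
events["category"] = events["element"].map(CATEGORY_MAP)

# Per-student counts of each page-view category become candidate model features.
features = events.pivot_table(index="student_id", columns="category",
                              values="element", aggfunc="count", fill_value=0)
print(features)
```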