

Title: Exploring Novices' Struggle and Progress During Programming Through Data-Driven Detectors and Think-Aloud Protocols
Many students struggle when they are first learning to program. Without help, these students can lose confidence and negatively assess their programming ability, which can ultimately lead to dropouts. However, detecting the exact moment of student struggle is still an open question in computing education. In this work, we conducted a think-aloud study with five high-school students to investigate the automatic detection of progressing and struggling moments using a detector algorithm (SPD). SPD classifies student trace logs into moments of struggle and progress based on their similarity to prior students' correct solutions. We explored the extent to which the SPD-identified moments of struggle aligned with expert-identified moments based on novices' verbalized thoughts and programming actions. Our analysis suggests that SPD can catch students' struggling and progressing moments with a 72.5% F1-score, but room remains for improvement in detecting struggle. Moreover, we conducted an in-depth examination to discover why discrepancies arose between expert-identified and detector-identified struggle moments. We conclude with recommendations for future data-driven struggle detection systems.
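Below is a minimal sketch of the kind of similarity-based labeling the abstract describes: compare each snapshot in a student's trace log against prior correct solutions and call a moment "progress" when similarity to the nearest correct solution improves, and "struggle" otherwise. The character-level similarity measure, the min_gain threshold, and all function names are illustrative assumptions, not the SPD algorithm itself.

```python
# Hypothetical sketch of a similarity-based struggle/progress detector,
# loosely inspired by the SPD idea in the abstract above. The token-level
# similarity measure and threshold are illustrative assumptions.
from difflib import SequenceMatcher


def similarity(snapshot: str, solution: str) -> float:
    """Ratio of matching characters between a snapshot and a solution (0..1)."""
    return SequenceMatcher(None, snapshot, solution).ratio()


def nearest_solution_score(snapshot: str, correct_solutions: list[str]) -> float:
    """Similarity of a snapshot to the closest prior correct solution."""
    return max(similarity(snapshot, s) for s in correct_solutions)


def label_moments(trace: list[str], correct_solutions: list[str],
                  min_gain: float = 0.01) -> list[str]:
    """Label each consecutive pair of snapshots as 'progress' or 'struggle'."""
    scores = [nearest_solution_score(snap, correct_solutions) for snap in trace]
    labels = []
    for prev, curr in zip(scores, scores[1:]):
        labels.append("progress" if curr - prev >= min_gain else "struggle")
    return labels


if __name__ == "__main__":
    prior_solutions = ["def add(a, b):\n    return a + b"]
    student_trace = ["def add(a, b):",
                     "def add(a, b):\n    return a - b",
                     "def add(a, b):\n    return a + b"]
    print(label_moments(student_trace, prior_solutions))
```

The actual detector presumably uses a richer program representation and thresholds tuned on prior student data; this sketch only illustrates the trace-to-solution comparison idea.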
Award ID(s):
1917885
PAR ID:
10528599
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
ISBN:
979-8-3503-2946-9
Page Range / eLocation ID:
179-183
Subject(s) / Keyword(s):
novice programming, computer science education, struggle detection, progress detection
Format(s):
Medium: X
Location:
Washington, DC, USA
Sponsoring Org:
National Science Foundation
More Like This
  1. Student perceptions of programming can impact their experiences in introductory computer science (CS) courses. For example, some students negatively assess their own ability in response to moments that are natural parts of expert practice, such as using online resources or getting syntax errors. Systems that automatically detect these moments from interaction log data could help us study these moments and intervene when they occur. However, while researchers have analyzed programming log data, few systems detect pre-defined moments, particularly those based on student perceptions. We contribute a new approach and system for detecting programming moments that students perceive as important from interaction log data. We conducted retrospective interviews with 41 CS students in which they identified moments that can prompt negative self-assessments. Then we created a qualitative codebook of the behavioral patterns indicative of each moment, and used this knowledge to build an expert system. We evaluated our system with log data collected from an additional 33 CS students. Our results are promising, with F1 scores ranging from 66% to 98%. We believe that this approach can be applied in many domains to understand and detect student perceptions of learning experiences.
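As a companion illustration, the sketch below shows what one expert-system-style rule over interaction logs might look like, in the spirit of the codebook-driven approach described above: flag a candidate moment when several consecutive runs fail with syntax errors inside a short time window. The event schema, threshold, and window are illustrative assumptions, not the study's actual detector.

```python
# Hypothetical sketch of one rule in an expert-system-style detector over
# interaction logs. The LogEvent schema, min_errors, and window_s are
# illustrative assumptions rather than the authors' codebook.
from dataclasses import dataclass


@dataclass
class LogEvent:
    timestamp: float          # seconds since session start
    kind: str                 # e.g. "run", "edit", "search"
    outcome: str = ""         # e.g. "syntax_error", "ok"


def detect_repeated_syntax_errors(events: list[LogEvent],
                                  min_errors: int = 3,
                                  window_s: float = 120.0) -> list[float]:
    """Return timestamps where at least min_errors consecutive runs
    failed with syntax errors within window_s seconds."""
    moments, streak = [], []
    for e in events:
        if e.kind != "run":
            continue
        if e.outcome == "syntax_error":
            streak.append(e.timestamp)
            if (len(streak) >= min_errors
                    and streak[-1] - streak[-min_errors] <= window_s):
                moments.append(e.timestamp)
        else:
            streak = []
    return moments
```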
  2. Many undergraduate students encounter struggle as they navigate academic, financial, and social contexts of higher education. The transition to emergency online instruction during the Spring of 2020 due to the COVID-19 pandemic exacerbated these struggles. To assess college students’ struggles during the transition to online learning in undergraduate biology courses, we surveyed a diverse collection of students (n = 238) at an R2 research institution in the Southeastern United States. Students were asked if they encountered struggles and whether they were able to overcome them. Based on how students responded, they were asked to elaborate on (1) how they persevered without struggle, (2) how they were able to overcome their struggles, or (3) what barriers they encountered that did not allow them to overcome their struggles. Each open-ended response was thematically coded to address salient patterns in students’ ability to either persevere or overcome their struggle. We found that during the transition to remote learning, 67% of students experienced struggle. The most reported struggles included: shifts in class format, effective study habits, time management, and increased external commitments. Approximately 83% of those struggling students were able to overcome their struggle, most often citing their instructor’s support and resources offered during the transition as reasons for their success. Students also cited changes in study habits and increased confidence or belief that they could excel within the course as ways in which they overcame their struggles. Overall, we found no link between struggles in the classroom and any demographic variables we measured, which included race/ethnicity, gender expression, first-generation college student status, transfer student status, and commuter student status. Our results highlight the critical role that instructors play in supporting student learning during these uncertain times by promoting student self-efficacy and a positive growth mindset, providing students with the resources they need to succeed, and creating a supportive and transparent learning environment.
  3. Undergraduate programs in computer science (CS) face high dropout rates, and many students struggle while learning to program. Studies show that perceived programming ability is a significant factor in students' decision to major in CS. Fortunately, psychology research shows that promoting the growth mindset, or the belief that intelligence grows with effort, can improve student persistence and performance. However, mindset interventions have been less successful in CS than in other domains. We conducted a small-scale interview study to explore how CS students talk about their intelligence, mindsets, and programming behaviors. We found that students' mindsets rarely aligned with definitions in the literature; some presented mindsets that combined fixed and growth attributes, while others behaved in ways that did not align with their mindsets. We also found that students frequently evaluate their self-efficacy by appraising their programming intelligence, using surprising criteria like typing speed and ease of debugging to measure ability. We conducted a survey study with 103 students to explore these self-assessment criteria further, and found that students use varying and conflicting criteria to evaluate intelligence in CS. We believe the criteria that students choose may interact with mindsets and impact their motivation and approach to programming, which could help explain the limited success of mindset interventions in CS.
  4. Programming can be an emotional experience, particularly for undergraduate students who are new to computer science. While researchers have interviewed novice programmers about their emotional experiences, it can be difficult to pinpoint the specific emotions that occur during a programming session. In this paper, we argue that electrodermal activity (EDA) sensors, which measure the physiological changes that are indicative of an emotional reaction, can provide a valuable new data source to help study student experiences. We conducted a study with 14 undergraduate students in which we collected EDA data while they worked on a programming problem. This data was then used to cue the participants’ recollections of their emotions during a retrospective interview about the programming experience. Using this methodology, we identified 21 distinct events that triggered student emotions, such as feeling anxiety due to a lack of perceived progress on the problem. We also identified common patterns in EDA data across multiple participants, such as a drop in their physiological reaction after developing a plan, corresponding with a calmer emotional state. These findings provide new information about how students experience programming that can inform research and practice, and also contribute initial evidence of the value of EDA data in supporting studies of emotions while programming. 
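The sketch below illustrates one way a sustained drop in an electrodermal activity (EDA) trace, like the post-planning calm described above, might be flagged automatically; the sampling rate, smoothing window, and drop threshold are illustrative assumptions rather than the study's analysis procedure.

```python
# Hypothetical sketch: smooth an EDA trace with a moving average and flag
# sustained decreases. All parameters are illustrative assumptions.
import numpy as np


def moving_average(signal: np.ndarray, window: int) -> np.ndarray:
    """Simple moving average with same-length output."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")


def find_sustained_drops(eda: np.ndarray, fs: float = 4.0,
                         smooth_s: float = 10.0, drop_frac: float = 0.15,
                         span_s: float = 30.0) -> list[int]:
    """Return sample indices where the smoothed signal has fallen by at
    least drop_frac of its earlier level over a span of span_s seconds."""
    smoothed = moving_average(eda, int(smooth_s * fs))
    lag = int(span_s * fs)
    drops = []
    for i in range(lag, len(smoothed)):
        before, now = smoothed[i - lag], smoothed[i]
        if before > 0 and (before - now) / before >= drop_frac:
            drops.append(i)
    return drops
```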
  5. Prediction of student performance in introductory programming courses can assist struggling students and improve their persistence. On the other hand, it is important for the prediction to be transparent for the instructor and students to effectively utilize the results of this prediction. Explainable machine learning models can effectively help students and instructors gain insights into students’ different programming behaviors and problem-solving strategies that can lead to good or poor performance. This study develops an explainable model that predicts students’ performance based on programming assignment submission information. We extract different data-driven features from students’ programming submissions and employ a stacked ensemble model to predict students’ final exam grades. We use SHAP, a game-theory-based framework, to explain the model’s predictions to help the stakeholders understand the impact of different programming behaviors on students’ success. Moreover, we analyze the impact of important features and utilize a combination of descriptive statistics and mixture models to identify different profiles of students based on their problem-solving patterns to bolster explainability. The experimental results suggest that our model significantly outperforms other machine learning models, including KNN, SVM, XGBoost, Bagging, Boosting, and linear regression. Our explainable and transparent model can help explain students’ common problem-solving patterns in relation to their level of expertise, enabling effective intervention and adaptive support for students.
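The sketch below illustrates the general recipe described above (a stacked ensemble over submission-derived features, explained with SHAP) on synthetic data; the feature names, models, and parameters are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch of a stacked ensemble explained with SHAP, using
# synthetic submission-log features; not the paper's exact pipeline.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Toy per-student features extracted from programming submission logs.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "num_submissions": rng.integers(1, 50, 200),
    "avg_minutes_between_submissions": rng.uniform(1, 120, 200),
    "frac_compile_errors": rng.uniform(0, 1, 200),
})
y = rng.uniform(0, 100, 200)  # synthetic final exam grades

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stacked ensemble: base learners combined by a linear meta-learner.
model = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)),
                ("ridge", Ridge())],
    final_estimator=Ridge(),
)
model.fit(X_train, y_train)

# Model-agnostic SHAP explanation of the ensemble's predictions.
explainer = shap.Explainer(model.predict, X_train)
shap_values = explainer(X_test.iloc[:20])
print(shap_values.values.shape)  # (samples, features) attribution matrix
```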