Title: Why do CS1 Students Think They're Bad at Programming?: Investigating Self-Efficacy and Self-Assessments at Three Universities
Undergraduate computer science (CS) programs often suffer from high dropout rates. Recent research suggests that self-efficacy -- an individual's belief in their ability to complete a task -- can influence whether students decide to persist in CS. Studies show that students' self-assessments affect their self-efficacy in many domains, and in CS, researchers have found that students frequently assess their programming ability based on their expectations about the programming process. However, we know little about the specific programming experiences that prompt the negative self-assessments that lead to lower self-efficacy. In this paper, we present findings from a survey study with 214 CS1 students from three universities. We used vignette-style questions to describe thirteen programming moments which may prompt negative self-assessments, such as getting syntax errors and spending time planning. We found that many students across all three universities reported that they negatively self-assess at each of the thirteen moments, despite differences in curriculum and population. Furthermore, those who report more frequent negative self-assessments tend to have lower self-efficacy. Finally, our findings suggest that students' perceptions of professional programming practice may influence their expectations and negative self-assessments. By reducing the frequency with which students self-assess negatively while programming, we may be able to improve self-efficacy and decrease dropout rates in CS.
Award ID(s):
1755628
NSF-PAR ID:
10215920
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2020 ACM Conference on International Computing Education Research
Page Range / eLocation ID:
170 to 181
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Student perceptions of programming can impact their experiences in introductory computer science (CS) courses. For example, some students negatively assess their own ability in response to moments that are natural parts of expert practice, such as using online resources or getting syntax errors. Systems that automatically detect these moments from interaction log data could help us study these moments and intervene when they occur. However, while researchers have analyzed programming log data, few systems detect pre-defined moments, particularly those based on student perceptions. We contribute a new approach and system for detecting programming moments that students perceive as important from interaction log data. We conducted retrospective interviews with 41 CS students in which they identified moments that can prompt negative self-assessments. Then we created a qualitative codebook of the behavioral patterns indicative of each moment, and used this knowledge to build an expert system. We evaluated our system with log data collected from an additional 33 CS students. Our results are promising, with F1 scores ranging from 66% to 98%. We believe that this approach can be applied in many domains to understand and detect student perceptions of learning experiences.
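To illustrate the kind of rule such an expert system might encode, the sketch below flags a "repeated syntax error" moment from a stream of timestamped log events. The event names, thresholds, and log format here are illustrative assumptions, not the authors' actual codebook or system.

```python
# Hypothetical sketch: one expert-system rule that flags a "repeated
# syntax error" moment from interaction log events. Event kinds,
# thresholds, and the log schema are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class LogEvent:
    timestamp: float  # seconds since session start
    kind: str         # e.g. "edit", "compile_error", "compile_ok"

def detect_repeated_syntax_errors(events, min_errors=3, window=120.0):
    """Return timestamps where `min_errors` compile errors occurred
    within `window` seconds with no successful compile in between."""
    moments = []
    streak = []  # timestamps of recent consecutive compile errors
    for ev in events:
        if ev.kind == "compile_error":
            streak.append(ev.timestamp)
            # drop errors that have slid outside the time window
            while streak and ev.timestamp - streak[0] > window:
                streak.pop(0)
            if len(streak) >= min_errors:
                moments.append(ev.timestamp)
                streak = []  # one burst of errors yields one moment
        elif ev.kind == "compile_ok":
            streak = []      # a successful compile breaks the streak
    return moments

log = [LogEvent(0, "edit"), LogEvent(10, "compile_error"),
       LogEvent(40, "compile_error"), LogEvent(70, "compile_error"),
       LogEvent(200, "compile_ok")]
print(detect_repeated_syntax_errors(log))  # -> [70]
```

A real system would combine many such rules, one per codebook pattern, and could then be scored against hand-labeled logs with standard precision/recall metrics to obtain F1 values like those reported above.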
  2. Undergraduate programs in computer science (CS) face high dropout rates, and many students struggle while learning to program. Studies show that perceived programming ability is a significant factor in students' decision to major in CS. Fortunately, psychology research shows that promoting the growth mindset, or the belief that intelligence grows with effort, can improve student persistence and performance. However, mindset interventions have been less successful in CS than in other domains. We conducted a small-scale interview study to explore how CS students talk about their intelligence, mindsets, and programming behaviors. We found that students' mindsets rarely aligned with definitions in the literature; some present mindsets that combine fixed and growth attributes, while others behave in ways that do not align with their mindsets. We also found that students frequently evaluate their self-efficacy by appraising their programming intelligence, using surprising criteria like typing speed and ease of debugging to measure ability. We conducted a survey study with 103 students to explore these self-assessment criteria further, and found that students use varying and conflicting criteria to evaluate intelligence in CS. We believe the criteria that students choose may interact with mindsets and impact their motivation and approach to programming, which could help explain the limited success of mindset interventions in CS. 
  3. University introductory computer science courses (CS1) present many challenges. Students enter CS1 with varying backgrounds and many are evaluating their potential for success in the major. Students often negatively self-assess in response to natural programming moments, such as getting a syntax error, but we have a limited understanding of the mechanisms that drive these self-assessments. In this paper, we study the differences in student assessments of themselves and their assessments of others in response to particular programming moments. We analyze survey data from 214 CS1 students, finding that many have a self-critical bias, evaluating themselves more harshly than others. We also found that women have a stronger self-critical bias, and that students tend to be more self-critical when the other is female. These insights can help us reduce the impact of negative self-assessments on student experiences. 
  4. Enrollment in computing at the college level has skyrocketed, and many institutions have responded by enacting competitive enrollment processes. However, little is known about the effects of enrollment policies on students' experiences. To identify relationships between those policies and students' experiences, we linked survey data from 1245 first-year students in 80 CS departments to a dataset of department policies. We found that competitive enrollment negatively predicts first-year students' perception of the computing department as welcoming, their sense of belonging, and their self-efficacy in computing. Both belonging and self-efficacy are known predictors of student retention in CS. In addition, these relationships are stronger for students without pre-college computing experience. Our classification of institutions as competitive is conservative, and false positives are likely. This biases our results and suggests that the negative relationships we found are an underestimation of the effects of competitive enrollment. 
  5. As high school programs are increasingly incorporating engineering content into their curricula, a question is raised as to the impacts of those programs on student attitudes towards engineering, in particular engineering design. From a collegiate perspective, there is a related question as to how first-year engineering programs at the college level should adapt to a greater percentage of incoming students with prior conceptions about engineering design and how to efficaciously uncover what those conceptions may be. Further, there is a broader question within engineering design as to how various design experiences, especially introductory experiences, may influence student attitudes towards the subject and towards engineering more broadly. Student attitudes are a broad and well-studied area, and a wide array of instruments have been shown to be valid and reliable assessments of various aspects of student motivation, self-efficacy, and interests. In terms of career interests, the STEM Career Interest Survey (STEM-CIS) has been widely used in grade school settings to gauge student intentions to pursue STEM careers, with a subscale focused on engineering. In self-efficacy and motivation, the Value-Expectancy STEM Assessment Scale (VESAS) is a STEM-focused adaptation of the broader Values, Interest, and Expectations Scale (VIES), which in turn builds upon Eccles' Value-Expectancy model of self-efficacy. When it comes to engineering design, there have been a few attempts to develop more focused instruments, such as Carberry's Design Self-Efficacy Instrument. For the purposes of this work, evaluating novice and beginning designer attitudes about engineering design, the available instruments were not found to assess the desired attributes. Design-focused instruments such as Carberry's were too narrowly focused on the stages of the design process, many of which required a certain a priori knowledge to effectively evaluate. Broader instruments such as the VESAS were too focused on working and studying engineering, rather than doing or identifying with engineering. A new instrument, the Engineering Design Value-Expectancy Scale (EDVES), was developed to meet this need. In its current form the EDVES includes 38 items across several subscales covering expectancy of success in, perceived value of, and identification with engineering and design. This work presents the EDVES and discusses the development process of the instrument. It presents validity evidence following the Cook validation evidence model, including scoring, generalization, and extrapolation validity evidence. This validation study was conducted using pre- and post-course deployment with 192 first-year engineering students enrolled in a foundational engineering design course.