Title: Understanding and Measuring Incremental Development in CS1
Incremental development is the practice of writing a small snippet of code and testing it before moving on. For students in introductory programming courses, incremental development is especially valuable: they tend to make more syntax errors, lack the proficiency to address complicated bugs, and may be more prone to frustration when struggling to correct code. However, to evaluate the effectiveness of interventions that aim to teach programming processes such as incremental development, we need measures that assess such processes. In this paper, we present a way to measure incremental development. By qualitatively analyzing 15 student coding interviews, we identified common behaviors in the programming process that relate to incremental development. We then leveraged a dataset of over 1000 development sessions -- about 52,000 code snapshots captured at compilation time -- to automatically detect the behaviors identified in our qualitative analysis. Finally, we crafted a formal metric, called the "Measure of Incremental Development" (MID), to quantify how effectively a student used incremental development during a programming session. The MID detects common non-incremental development patterns, such as excessive debugging after large additions of code, to automatically assess a sequence of snapshots. The MID agrees with human evaluations of incrementality with over 80% accuracy. Our metric enables new research directions and interventions focused on improving students' development practices.
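The MID's exact formula is not given in this abstract. As a rough illustration of the idea only, here is a minimal sketch that flags the non-incremental pattern named above (a large addition of code followed by a long run of failing compiles); the snapshot format, thresholds, and scoring are assumptions, not the authors' actual metric.

```python
# Hypothetical sketch of detecting one non-incremental pattern from a
# sequence of compilation-time snapshots. Thresholds and the scoring
# scheme are illustrative assumptions, not the published MID formula.
from dataclasses import dataclass


@dataclass
class Snapshot:
    code: str          # full program text captured at compilation time
    compiled_ok: bool  # whether this compilation succeeded


def lines_added(prev: Snapshot, curr: Snapshot) -> int:
    """Crude size delta: change in line count between consecutive snapshots."""
    return len(curr.code.splitlines()) - len(prev.code.splitlines())


def non_incremental_events(snapshots: list[Snapshot],
                           big_add: int = 15,
                           long_debug: int = 5) -> int:
    """Count 'large addition followed by a long failing-compile streak' events."""
    events = 0
    i = 1
    while i < len(snapshots):
        if lines_added(snapshots[i - 1], snapshots[i]) >= big_add:
            j = i
            while j < len(snapshots) and not snapshots[j].compiled_ok:
                j += 1
            if j - i >= long_debug:   # long debugging stretch after the big add
                events += 1
            i = max(j, i + 1)         # always make progress
        else:
            i += 1
    return events


def incrementality_score(snapshots: list[Snapshot]) -> float:
    """Toy session score in [0, 1]: 1.0 means no flagged events."""
    if len(snapshots) < 2:
        return 1.0
    return 1.0 - non_incremental_events(snapshots) / (len(snapshots) - 1)
```

In practice, any such thresholds would need calibration against human judgments, which is what the paper's comparison of the MID to human evaluations provides.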
Award ID(s):
2044473
NSF-PAR ID:
10398969
Journal Name:
ACM Technical Symposium on Computer Science Education (SIGCSE)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    As Scratch has become one of the most popular educational programming languages, understanding its common programming idioms can benefit both computing educators and learners. This understanding can fine-tune curricular development to help learners master the fundamentals of writing idiomatic code in their programming pursuits. Unfortunately, the research community's understanding of what constitutes idiomatic Scratch code has been limited. To help bridge this knowledge gap, we systematically identified idioms based on canonical source code presented in widely available educational materials. We implemented a tool that automatically detects these idioms and used it to assess their prevalence within a large dataset of over 70K Scratch projects spanning different experience backgrounds and project categories. Since communal learning and the practice of remixing are cornerstones of the Scratch programming community, we also studied the relationship between common programming idioms and remixes. Having analyzed the original projects and their remixes, we observed that different idioms may be associated with dissimilar types of code changes. Code changes in remixes are desirable, as they require a meaningful programming effort that spurs the learning process. The ability to substantially change a project in its remixes hinges on the project's code being easy to understand and modify. Our findings suggest that the presence of certain common idioms can indeed positively impact the degree of code changes in remixes. Our findings can help form a foundation for what comprises common Scratch programming idioms, thus benefiting both introductory computing education and Scratch programming tools.
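    The abstract describes a tool that detects idioms automatically but does not show how. As a rough illustration only, here is a minimal sketch, assuming Scratch 3 project.json files and one plausible candidate idiom (a green-flag hat block that immediately starts a forever loop); neither the chosen idiom nor the code reflects the authors' actual tool.

```python
# Hypothetical sketch of idiom detection over a Scratch 3 project.json
# file. The chosen idiom and file handling are illustrative assumptions,
# not the authors' actual detector.
import json


def count_flag_forever_idiom(project_json_path: str) -> int:
    """Count scripts that start with 'when green flag clicked' and
    immediately enter a forever loop."""
    with open(project_json_path) as f:
        project = json.load(f)
    count = 0
    for target in project.get("targets", []):  # the stage and each sprite
        blocks = target.get("blocks", {})
        for block in blocks.values():
            if not isinstance(block, dict):
                continue  # skip standalone variable/list reporters
            if block.get("opcode") != "event_whenflagclicked":
                continue
            next_block = blocks.get(block.get("next") or "")
            if isinstance(next_block, dict) and \
                    next_block.get("opcode") == "control_forever":
                count += 1
    return count


if __name__ == "__main__":
    print(count_flag_forever_idiom("project.json"))
```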
  2. Engineers must understand how to build, apply, and adapt various types of models in order to be successful. Throughout undergraduate engineering education, modeling is fundamental to many core concepts, though it is rarely explicitly taught. There are many benefits to explicitly teaching modeling, particularly in the first years of an engineering program. The research questions that drove this study are: (1) How do students' solutions to a complex, open-ended problem (both written and coded solutions) develop over the course of multiple submissions? and (2) How do these developments compare across groups of students that did and did not participate in a course centered around modeling? We explored students' solutions to an open-ended problem across multiple sections of an introductory programming course. These sections were divided into two groups: (1) an experimental group, whose sections discussed and utilized mathematical and computational models explicitly throughout the course, and (2) a comparison group, whose sections focused on developing algorithms and writing code with a more traditional approach. All sections required students to complete a common open-ended problem with two versions (the first using a smaller data set and the second a larger one). Each version had two submissions: (1) a mathematical model or algorithm (i.e., the students' written solution, potentially with tables and figures) and (2) a computational model or program (i.e., the students' MATLAB code). The solutions were graded by student graders who had completed two required training sessions, which consisted of assessing multiple sample student solutions using the rubrics, to ensure consistency across grading. The resulting rubric-based assessments were analyzed to identify patterns in students' submissions and to compare across sections. The results identified differences in mathematical and computational model development between students in the experimental and comparison groups. Students in the experimental group were better able to address the complexity of the problem. Most groups demonstrated similar levels and types of change across submissions for the other dimensions, which related to the purpose of model components, addressing users' anticipated needs, and communicating solutions. These findings help inform other researchers and instructors how to help students develop mathematical and computational modeling skills, especially in a programming course. This work is part of a larger NSF study about the impact of varying levels of modeling interventions, related to different types of models, on students' awareness of different types of models and their applications, as well as their ability to apply and develop different types of models.
  3. This "work in progress" paper describes a multiyear project to study the development of engineering identity in a chemical and biological engineering program at Montana State University. The project focuses on how engineering identity may be impacted by a series of interventions that use subject material from a senior-level capstone design course and have the senior capstone design students serve as peer mentors to first- and second-year students. A more rapid development of an engineering identity by first- and second-year students is hypothesized to increase retention and persistence in this engineering program. Through a series of timed interventions scheduled to take place in the first and second year, including cohorts that serve as negative controls (no intervention), we hope to ascertain: (1) the extent to which exposure to a peer mentor increases students' engineering identity development over time, relative to a control group that receives no peer mentoring, and (2) whether the quantity and/or timing of the peer interactions impact engineering identity development. While the project includes interventions for both first- and second-year students, this work-in-progress paper focuses on the experiences of first-year students as a result of the interventions and their development of an engineering identity over the course of the semester. Early in the fall semester, first-year chemical engineering students enrolled in an introductory chemical engineering course and senior students in a capstone design course were administered a survey containing a validated instrument to assess engineering identity. The first-year course had 107 students and the senior-level course had 92 students; approximately 50% of the students in both cohorts completed the survey. Mid-semester, after the first-year students had been introduced to the concepts of process flow diagrams and material balances in their course, senior design teams gave presentations about their capstone design projects in the introductory course. The presentations focused on the project goals and design process and highlighted the process flow diagrams. After the presentations, first-year and senior students attended small group dinners as part of a homework assignment, at which the senior students were directed to communicate information about their design projects and share their experiences in the chemical engineering program. Dinners occurred over several days, with up to ten first-year students and five seniors attending each event. First-year students were encouraged to use this time to discover more about the major, inquire about future coursework, and learn about ways to enrich their educational experience through extracurricular and co-curricular activities. Several weeks after the dinner experience, senior students returned to give additional presentations to the first-year students, focusing on the environmental and societal impacts of their design projects. We report baseline engineering identity in this paper.
  4.
    The Deep Learning Epilepsy Detection Challenge: design, implementation, and test of a new crowd-sourced AI challenge ecosystem. Isabell Kiral*, Subhrajit Roy*, Todd Mummert*, Alan Braz*, Jason Tsay, Jianbin Tang, Umar Asif, Thomas Schaffter, Eren Mehmet, The IBM Epilepsy Consortium◊, Joseph Picone, Iyad Obeid, Bruno De Assis Marques, Stefan Maetschke, Rania Khalaf†, Michal Rosen-Zvi†, Gustavo Stolovitzky†, Mahtab Mirmomeni†, Stefan Harrer†. * These authors contributed equally to this work. † Corresponding authors: rkhalaf@us.ibm.com, rosen@il.ibm.com, gustavo@us.ibm.com, mahtabm@au1.ibm.com, sharrer@au.ibm.com. ◊ Members of the IBM Epilepsy Consortium are listed in the Acknowledgements section. J. Picone and I. Obeid are with Temple University, USA. T. Schaffter is with Sage Bionetworks, USA. E. Mehmet is with the University of Illinois at Urbana-Champaign, USA. All other authors are with IBM Research in the USA, Israel, and Australia.
    Introduction. This decade has seen an ever-growing number of scientific fields benefit from the advances in machine learning technology and tooling. More recently, this trend has reached the medical domain, with applications ranging from cancer diagnosis [1] to the development of brain-machine interfaces [2]. While Kaggle has pioneered the crowd-sourcing of machine learning challenges to incentivise data scientists from around the world to advance algorithm and model design, the increasing complexity of problem statements demands that participants be expert data scientists, deeply knowledgeable in at least one other scientific domain, and competent software engineers with access to large compute resources. People who match this description are few and far between, unfortunately leading to a shrinking pool of possible participants and a loss of experts dedicating their time to solving important problems. Participation is restricted even further for any challenge run on confidential use cases or with sensitive data. Recently, we designed and ran a deep learning challenge to crowd-source the development of an automated labelling system for brain recordings, aiming to advance epilepsy research. A focus of this challenge, run internally at IBM, was the development of a platform that lowers the barrier of entry and therefore mitigates the risk of excluding interested parties from participating.
    The challenge: enabling wide participation. With the goal of running a challenge that mobilises the largest possible pool of participants from IBM (global), we designed a use case around previous work in epileptic seizure prediction [3]. In this "Deep Learning Epilepsy Detection Challenge", participants were asked to develop an automatic labelling system to reduce the time a clinician would need to diagnose patients with epilepsy. Labelled training and blind validation data for the challenge were generously provided by Temple University Hospital (TUH) [4]. TUH also devised a novel scoring metric for the detection of seizures that was used as the basis for algorithm evaluation [5]. In order to provide an experience with a low barrier of entry, we designed a generalisable challenge platform under the following principles:
    1. No participant should need in-depth knowledge of the specific domain (i.e. no participant should need to be a neuroscientist or epileptologist).
    2. No participant should need to be an expert data scientist.
    3. No participant should need more than basic programming knowledge (i.e. no participant should need to learn how to process fringe data formats and stream data efficiently).
    4. No participant should need to provide their own computing resources.
    In addition to the above, our platform should further guide participants through the entire process from sign-up to model submission, facilitate collaboration, and provide instant feedback to the participants through data visualisation and intermediate online leaderboards.
    The platform. The architecture of the platform that was designed and developed is shown in Figure 1. The entire system consists of a number of interacting components. (1) A web portal serves as the entry point to challenge participation, providing challenge information such as timelines, challenge rules, and scientific background. The portal also facilitated the formation of teams and provided participants with an intermediate leaderboard of submitted results and a final leaderboard at the end of the challenge. (2) IBM Watson Studio [6] is the umbrella term for a number of services offered by IBM. Upon creation of a user account through the web portal, an IBM Watson Studio account was automatically created for each participant, giving users access to IBM's Data Science Experience (DSX), the analytics engine Watson Machine Learning (WML), and IBM's Cloud Object Storage (COS) [7], all of which are described in more detail below. (3) The user interface and starter kit were hosted on IBM's Data Science Experience platform (DSX) and formed the main component for designing and testing models during the challenge. DSX allows for real-time collaboration on shared notebooks between team members. A starter kit in the form of a Python notebook, supporting the popular deep learning libraries TensorFlow [8] and PyTorch [9], was provided to all teams to guide them through the challenge process. Upon instantiation, the starter kit loaded the necessary Python libraries and custom functions for behind-the-scenes integration with COS and WML. In dedicated spots in the notebook, participants could write custom pre-processing code, machine learning models, and post-processing algorithms. The starter kit provided instant feedback about participants' custom routines through data visualisations. Using the notebook only, teams were able to run their code on WML, making use of a compute cluster of IBM's resources. The starter kit also enabled submission of the final code to a data store to which only the challenge team had access. (4) Watson Machine Learning provided access to shared compute resources (GPUs). Code was bundled up automatically in the starter kit and deployed to and run on WML. WML in turn had access to shared storage, from which it requested recorded data and in which it stored the participants' code and trained models. (5) IBM's Cloud Object Storage held the data for this challenge. Using the starter kit, participants could investigate their results as well as data samples in order to better design custom algorithms. (6) Utility functions were loaded into the starter kit at instantiation. This set of functions included code to pre-process data into a more common format, to optimise streaming through the use of the NutsFlow and NutsML libraries [10], and to provide seamless access to all the IBM services used. Not captured in the diagram is the final code evaluation, which was conducted automatically as soon as code was submitted through the starter kit, minimising the burden on the challenge organising team.
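    The abstract mentions that participants filled in pre-processing, model, and post-processing code at dedicated spots in the starter-kit notebook. As a rough illustration only, here is a minimal sketch of that hook structure; the function names and the local runner are hypothetical assumptions, not IBM's actual starter-kit API.

```python
# Hypothetical sketch of the starter kit's hook structure described
# above. All names are illustrative assumptions, not IBM's actual API.
from typing import Any, Callable


def preprocess(raw_recording: Any) -> Any:
    """Participant-written pre-processing (e.g. filtering, windowing EEG)."""
    return raw_recording  # placeholder: return extracted features


def build_model() -> Callable[[Any], Any]:
    """Participant-written model definition (e.g. TensorFlow or PyTorch)."""
    return lambda features: features  # placeholder: identity 'model'


def postprocess(predictions: Any) -> Any:
    """Participant-written post-processing (e.g. smoothing predicted labels)."""
    return predictions


def run_pipeline(load_data: Callable[[], Any]) -> Any:
    """Glue the three hooks together. On the real platform, this bundle
    was deployed to shared compute (WML) rather than run locally."""
    features = preprocess(load_data())
    model = build_model()
    return postprocess(model(features))
```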
    [Figure 1: High-level architecture of the challenge platform]
    Measuring success. The competitive phase of the "Deep Learning Epilepsy Detection Challenge" ran for six months. Twenty-five teams, comprising a total of 87 scientists and software engineers from 14 global locations, participated. All participants made use of the starter kit we provided and ran their algorithms on IBM's WML infrastructure. Seven teams persisted until the end of the challenge and submitted final solutions. The best-performing solutions reached seizure detection performance that could reduce a hundred-fold the time epileptologists need to annotate continuous EEG recordings. Thus, we expect the developed algorithms to aid in the diagnosis of epilepsy by significantly shortening manual labelling time. Detailed results are currently being prepared for publication. Equally important to solving the scientific challenge, however, was understanding whether we managed to encourage participation from non-expert data scientists.
    [Figure 2: Primary occupation as reported by challenge participants]
    Of the 40 participants for whom we have occupational information, 23 reported Data Science or AI as their main job description, 11 reported being software engineers, and 2 had expertise in neuroscience. Figure 2 shows that participants had a variety of specialisations, including some in no way related to data science, software engineering, or neuroscience. No participant had deep knowledge and experience in all three of data science, software engineering, and neuroscience.
    Conclusion. Given the growing complexity of data science problems and increasing dataset sizes, solving these problems requires enabling collaboration between people with different areas of expertise, with a focus on inclusiveness and a low barrier of entry. We designed, implemented, and tested a challenge platform to address exactly this. Using our platform, we ran a deep learning challenge for epileptic seizure detection; 87 IBM employees from several business units, including but not limited to IBM Research, and with a variety of skills, including sales and design, participated in this highly technical challenge.