Title: Use of an Anti-Pattern in CS2: Sequential if Statements with Exclusive Conditions
How can we teach students to use more readable code structures? How common is it for students to choose less readable (but still functional) alternatives? We explore these questions for a specific anti-pattern: using sequential if statements when conditions are exclusive (rather than using else-if or else). We created and validated an automated detector to identify this anti-pattern in students' code. Running the detector on 1,764 homework submissions (from 270 students in a CS2 class on data structures and algorithms) showed that this anti-pattern was common and varied by assignment: across 12 assignments, 3% to 50% of submissions used sequential ifs for exclusive cases. However, using this anti-pattern did not preclude using else-ifs: across assignments, up to 34% of the submissions used both forms. Further, students used sequential if statements in surprising ways, such as checking a condition and then the negation of that condition, indicating a more novice level of understanding than expected for an intermediate course. Hand-inspection of the detector-flagged cases suggests that sequential ifs for exclusive cases may be a code smell that can indicate larger problems with logic and abstraction.
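For illustration only (the paper does not reproduce the assignments or student code on this page), a minimal Java sketch of the anti-pattern and its else-if counterpart; the grading function and its thresholds are hypothetical:

// Hypothetical illustration of sequential ifs with exclusive conditions.
public class SequentialIfExample {

    // Anti-pattern: the three conditions are mutually exclusive, yet every
    // one is tested even after an earlier branch has already matched; the
    // later ifs re-test the negation of conditions handled above them.
    static String classifySequentialIfs(int score) {
        String label = "";
        if (score >= 90) { label = "A"; }
        if (score >= 80 && score < 90) { label = "B"; }
        if (score < 80) { label = "C"; }
        return label;
    }

    // Equivalent logic with else-if/else: each condition is evaluated at
    // most once, and the exclusivity is visible in the structure itself.
    static String classifyElseIf(int score) {
        if (score >= 90) { return "A"; }
        else if (score >= 80) { return "B"; }
        else { return "C"; }
    }

    public static void main(String[] args) {
        System.out.println(classifySequentialIfs(85)); // prints B
        System.out.println(classifyElseIf(85));        // prints B
    }
}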
Award ID(s):
1948519
PAR ID:
10423486
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 54th ACM Technical Symposium on Computer Science Education
Volume:
1
Page Range / eLocation ID:
542 to 548
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Experts often use particular control flow structures to make their code easier to read and modify, such as using the logical operator AND to conjoin conditions rather than nesting separate if statements. Within Boolean expressions, experts take advantage of short-circuit evaluation by ordering their conditions to avoid errors (such as checking that an index is within the bounds of an array before examining the value at that index; see the sketch after this list). How well do students understand these structures? We investigate students' use and understanding of conjoined versus separate conditions within a larger assessment of 125 undergraduate students at the end of their second- and third-semester CS courses (in algorithms & data structures and introductory software engineering). The assessment asked students to: write code where an edge-case error could be avoided with short-circuit evaluation, revise their code with nudges towards expert structure, and answer comprehension questions involving code tracing. When writing, students frequently forgot to check for a key edge case. When that case was included, the check was often separated into its own if statement rather than conjoined with the other conditions. This could indicate a stylistic choice or a belief that the check had to be separated for functionality. Notably, students who included all necessary conditions rarely exhibited the error of ordering them incorrectly. However, with code comprehension, students demonstrated significant misunderstandings about the effects of condition ordering. Students were more accurate on comprehension tasks with nested ifs than conjoined conditions, and this effect was most pronounced when the ordering of the conditions would lead to errors. When conditions were conjoined in a single expression, only 35% of students recognized that checking a value at an index before checking that the index was in bounds would lead to an error. However, 54% of students recognized the problem when the conditions were separated into individual if statements. This demonstrates a subtlety in code execution that intermediate students may not have mastered and emphasizes the challenges in assessing students' understanding solely via the way they write code.
  2. Since intermediate CS students can use a variety of control structures, why do their choices often not match experts'? Students may not realize which choices experts prefer, find non-expert choices easier to read, or simply forget to write with expert structure. To disentangle these explanations, we surveyed 328 second- and third-semester undergraduates, with tasks including writing short functions, selecting which structure was most readable or best styled, and comprehension questions. Questions focused on seven control structure topics that were important to instructors (e.g., factoring out repeated code between an if-block and its else; see the sketch after this list). Students frequently wrote with non-expert structure, and, for five topics, at least one third of students (48% to 71%) thought a non-expert structure was more readable than the expert one. However, students often made one choice when writing code, but preferred a different choice when reading it. Additionally, for more complex topics, students often failed to notice (or understand) differences in execution caused by changes in structure. Together, these results suggest that instruction and practice for choosing control structures should be context-specific, and that assessment focused only on code writing may miss underlying misunderstandings.
  3. Would providing choice lead to improved learning with a tutor? We had conducted and reported a controlled study earlier, in which introductory programming students were given the choice of skipping the line-by-line feedback provided after each incorrect answer in a tutor on if/if-else statements. Contrary to expectations, the study found that the choice to skip feedback did not lead to greater learning. We tried to reproduce these results using two tutors on if/if-else and switch statements, and with a larger subject pool. We found that whereas choice did not lead to greater learning on the if/if-else tutor in this reproducibility study either, it resulted in decreased learning on the switch tutor. We hypothesize that skipping feedback is indeed detrimental to learning, but that inter-relationships among the concepts covered by a tutor, and the transfer of learning facilitated by these relationships, compensate for the negative effect of skipping line-by-line feedback. We also found contradictory results between the two studies, which highlights the need for reproducibility studies in empirical research.
  4. Abstract: How well do code-writing tasks measure students' knowledge of programming patterns and anti-patterns? How can we assess this knowledge more accurately? To explore these questions, we surveyed 328 intermediate CS students and measured their performance on different types of tasks, including writing code, editing someone else's code, and, if applicable, revising their own alternatively-structured code. Our tasks targeted returning a Boolean expression (see the sketch after this list) and using unique code within an if and else. We found that code writing sometimes under-estimated student knowledge. For tasks targeting returning a Boolean expression, over 55% of students who initially wrote with non-expert structure successfully revised to expert structure when prompted, even though the prompt did not include guidance on how to improve their code. Further, over 25% of students who initially wrote non-expert code could properly edit someone else's non-expert code to expert structure. These results show that non-expert code is not a reliable indicator of deep misconceptions about the structure of expert code. Finally, although code writing is correlated with code editing, the relationship is weak: a model with code writing as the sole predictor of code editing explains less than 15% of the variance. Model accuracy improves when we include additional predictors that reflect other facets of knowledge, namely the identification of expert code and the selection of expert code as more readable than non-expert code. Together, these results indicate that a combination of code writing, revising, editing, and identification tasks can provide a more accurate assessment of student knowledge of programming patterns than code writing alone.
  5. Engineers must understand how to build, apply, and adapt various types of models in order to be successful. Throughout undergraduate engineering education, modeling is fundamental for many core concepts, though it is rarely explicitly taught. There are many benefits to explicitly teaching modeling, particularly in the first years of an engineering program. The research questions that drove this study are: (1) How do students' solutions to a complex, open-ended problem (both written and coded solutions) develop over the course of multiple submissions? and (2) How do these developments compare across groups of students that did and did not participate in a course centered around modeling? Students' solutions to an open-ended problem across multiple sections of an introductory programming course were explored. These sections were divided into two groups: (1) an experimental group, whose sections discussed and utilized mathematical and computational models explicitly throughout the course, and (2) a comparison group, whose sections focused on developing algorithms and writing code with a more traditional approach. All sections required students to complete a common open-ended problem with two versions (the first with a smaller data set and the second with a larger data set). Each version had two submissions: (1) a mathematical model or algorithm (i.e., students' written solution, potentially with tables and figures) and (2) a computational model or program (i.e., students' MATLAB code). The students' solutions were graded by student graders after they completed two required training sessions that consisted of assessing multiple sample student solutions using the rubrics to ensure consistency across grading. The resulting rubric-based assessments of students' work were analyzed to identify patterns in students' submissions and comparisons across sections. The results identified differences in mathematical and computational model development between students from the experimental and comparison groups. The students in the experimental group were better able to address the complexity of the problem. Most groups demonstrated similar levels and types of change across the submissions for the other dimensions related to the purpose of model components, addressing the users' anticipated needs, and communicating their solutions. These findings help inform other researchers and instructors about how to help students develop mathematical and computational modeling skills, especially in a programming course. This work is part of a larger NSF study about the impact of varying levels of modeling interventions related to different types of models on students' awareness of different types of models and their applications, as well as their ability to apply and develop different types of models.
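Following up on item 1 above, a minimal Java sketch (hypothetical, not drawn from the study's assessment items) of the bounds-check pattern it describes, contrasting a conjoined short-circuit condition with separate nested ifs:

// Hypothetical illustration of the bounds-check pattern from item 1.
public class BoundsCheckExample {

    // Expert form: && short-circuits left to right, so arr[i] is only
    // read once i is known to be a valid index.
    static boolean isTargetConjoined(int[] arr, int i, int target) {
        return i >= 0 && i < arr.length && arr[i] == target;
    }

    // Nested form: functionally equivalent, but the ordering guarantee is
    // expressed through structure rather than short-circuit evaluation.
    static boolean isTargetNested(int[] arr, int i, int target) {
        if (i >= 0 && i < arr.length) {
            if (arr[i] == target) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        int[] arr = {4, 7, 9};
        System.out.println(isTargetConjoined(arr, 5, 9)); // false, no exception
        System.out.println(isTargetNested(arr, 5, 9));    // false, no exception
        // Reversing the conjoined order, e.g. arr[i] == target && i < arr.length,
        // would throw ArrayIndexOutOfBoundsException for i = 5.
    }
}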
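Following up on item 2 above, a minimal Java sketch (hypothetical; the survey items themselves are not reproduced) of one of the listed topics, factoring repeated code out of an if-block and its else:

// Hypothetical illustration of factoring shared code out of an if/else.
import java.util.ArrayList;
import java.util.List;

public class FactorOutExample {

    // Non-expert structure: the log.add("recorded") call is duplicated in both branches.
    static void recordDuplicated(int value, List<String> log) {
        if (value < 0) {
            log.add("recorded");
            log.add("negative");
        } else {
            log.add("recorded");
            log.add("non-negative");
        }
    }

    // Expert structure: the shared statement is hoisted out of the branches.
    static void recordFactored(int value, List<String> log) {
        log.add("recorded");
        if (value < 0) {
            log.add("negative");
        } else {
            log.add("non-negative");
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        recordDuplicated(-3, log);
        recordFactored(-3, log);
        System.out.println(log); // [recorded, negative, recorded, negative]
    }
}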
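Following up on item 4 above, a minimal Java sketch (hypothetical) of the "returning a Boolean expression" task family, contrasting the non-expert if/else-with-literals form with the expert direct return:

// Hypothetical illustration of returning a Boolean expression directly.
public class ReturnBooleanExample {

    // Non-expert structure: an if/else whose branches return true and false.
    static boolean isEvenVerbose(int n) {
        if (n % 2 == 0) {
            return true;
        } else {
            return false;
        }
    }

    // Expert structure: return the Boolean expression itself.
    static boolean isEvenDirect(int n) {
        return n % 2 == 0;
    }

    public static void main(String[] args) {
        System.out.println(isEvenVerbose(6)); // true
        System.out.println(isEvenDirect(6));  // true
    }
}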