What knowledge does learning programming require? Prior work has focused on theorizing program writing and problem solving skills. We examine program comprehension and propose a formal theory of program tracing knowledge based on control flow paths through an interpreter program's source code. Because novices cannot understand the interpreter's programming language notation, we transform it into causal relationships from code tokens to instructions to machine state changes. To teach this knowledge, we propose a comprehension-first pedagogy based on causal inference: showing, explaining, and assessing each path by stepping through concrete examples within many example programs. To assess this pedagogy, we built PLTutor, a tutorial system with a fixed curriculum of example programs. We evaluate learning gains among self-selected CS1 students using a block-randomized lab study comparing PLTutor with Codecademy, a writing tutorial. In our small study, we find some evidence of improved learning gains on the SCS1, with PLTutor's average learning gains 60% higher than Codecademy's (3.89 vs. 2.42 out of 27 questions). These gains strongly predicted midterm grades (R²=.64) only for PLTutor participants, whose grades showed less variation and no failures.
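To make the token-to-instruction-to-state-change chain concrete, here is a minimal sketch in Python of tracing a single assignment statement. This is our illustration of the idea, not PLTutor's actual implementation.

```python
# Illustrative sketch (not PLTutor's implementation): trace one assignment
# as a causal chain from code tokens to an instruction to a state change.

def trace_assignment(state, target, expression):
    """Evaluate `expression` in `state`, bind the result to `target`,
    and print each link of the causal chain."""
    value = eval(expression, {}, dict(state))   # instruction: evaluate RHS
    new_state = {**state, target: value}        # state change: bind the name
    print(f"tokens     : {target} = {expression}")
    print(f"instruction: evaluate {expression!r} -> {value}")
    print(f"state      : {state} -> {new_state}")
    return new_state

state = {"x": 3}
state = trace_assignment(state, "y", "x + 1")   # traces y = x + 1
```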
Investigating the Impact of Using a Live Programming Environment in a CS1 Course
Novice programmers often struggle with code understanding and debugging. Live Programming environments visualize the runtime values of a program each time it is modified, providing immediate feedback that helps with tracing the program's execution. This paper presents the use of a Live Programming tool in a CS1 course to better understand the impact of Live Programming on novices' learning metrics and their perceptions of the tool. We conducted a within-subjects study at a large public university in a CS1 course in Python (N=237) where students completed tasks in a lab setting, in some cases with a Live Programming environment and in some cases without. Through post-lab surveys and open-ended feedback, we measured how well students understood the material and how they perceived the programming environment. To understand the impact of Live Programming, we compared the collected data for students who used Live Programming with the data for students who did not. We found that while learning outcomes were the same whether or not Live Programming was used, students who used the Live Programming tool completed some code tracing tasks faster. Furthermore, students liked the Live Programming environment more and rated it as more helpful for their learning.
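The core mechanism described above, re-executing the program on every edit and surfacing the resulting values, can be sketched in a few lines of Python. This is a minimal illustration of the idea, not the tool used in the study.

```python
# Minimal sketch of the live-programming feedback loop (not the study's
# tool): re-run the program after each edit and show the runtime values.

def run_live(source: str) -> dict:
    """Execute `source` and return its top-level variable bindings."""
    namespace = {}
    exec(source, namespace)
    return {k: v for k, v in namespace.items() if not k.startswith("__")}

edits = [
    "total = 1 + 2",
    "total = 1 + 2\ndoubled = total * 2",
]
for source in edits:            # each edit triggers re-execution
    print(run_live(source))     # {'total': 3}, then {'total': 3, 'doubled': 6}
```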
- Award ID(s):
- 2044473
- NSF-PAR ID:
- 10313401
- Date Published:
- Journal Name:
- Proceedings of the 53rd ACM Technical Symposium on Computer Science Education
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Peer assessment, as a form of collaborative learning, can engage students in active learning and improve their learning gains. However, current teaching platforms and programming environments provide little support for integrating peer assessment into in-class programming exercises. We identified challenges in conducting such exercises and adopting peer assessment through formative interviews with instructors of introductory programming courses. To address these challenges, we introduce PuzzleMe, a tool that helps Computer Science instructors conduct engaging in-class programming exercises. PuzzleMe leverages peer assessment to support a collaboration model where students provide timely feedback on their peers' work. We propose two assessment techniques tailored to in-class programming exercises: live peer testing and live peer code review. Live peer testing can improve students' code robustness by allowing them to create and share lightweight tests with peers. Live peer code review can improve code understanding by intelligently grouping students to maximize meaningful code reviews. A two-week deployment study revealed that PuzzleMe encourages students to write useful test cases, identify code problems, correct misunderstandings, and learn a diverse set of problem-solving approaches from peers.
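The lightweight tests shared in live peer testing can be pictured as simple input/expected-output pairs run against a peer's solution. The sketch below is our assumption about what such a test might look like, not PuzzleMe's actual format or API.

```python
# Hypothetical sketch of a "lightweight test" in the spirit of live peer
# testing; the format is assumed for illustration, not PuzzleMe's API.

from dataclasses import dataclass

@dataclass
class PeerTest:
    inputs: tuple     # arguments passed to the peer's function
    expected: object  # output the test's author expects

def run_peer_tests(func, tests):
    """Run shared tests against a peer's solution and report results."""
    for t in tests:
        actual = func(*t.inputs)
        status = "pass" if actual == t.expected else f"fail (got {actual})"
        print(f"{func.__name__}{t.inputs} == {t.expected}: {status}")

def absolute(n):      # a peer's buggy solution under review
    return n          # bug: never negates negative inputs

run_peer_tests(absolute, [PeerTest((5,), 5), PeerTest((-3,), 3)])
```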
-
We propose and evaluate a lightweight strategy for tracing code that can be efficiently taught to novice programmers, building on recent findings about "sketching" when tracing. This strategy helps novices apply the syntactic and semantic knowledge they are learning by encouraging line-by-line tracing and providing an external representation of memory for them to update. To evaluate the effect of teaching this strategy, we conducted a block-randomized experiment with 24 novices enrolled in a university-level CS1 course. We spent only 5-10 minutes introducing the strategy to the experimental condition. We then asked both conditions to think aloud as they predicted the output of short programs. Students using this strategy scored on average 15% higher than students in the control group on the tracing problems used in the study (p<0.05). Qualitative analysis of think-aloud and interview data showed that tracing systematically (line-by-line and "sketching" intermediate values) led to better performance, and that the strategy scaffolded and encouraged systematic tracing. Students who learned the strategy also scored on average 7% higher on the course midterm. These findings suggest that in under an hour and without computer-based tools, we can improve CS1 students' tracing abilities by explicitly teaching a strategy.
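A small worked example makes the strategy concrete: trace each line in order and update a written sketch of memory as values change. The program and sketch below are illustrative, not taken from the study's materials.

```python
# Line-by-line trace with an external memory sketch (illustrative program,
# not from the study). The comments show the sketch after each line runs.

a = 3          # sketch: a=3
b = a * 2      # sketch: a=3, b=6
a = b - 1      # sketch: a=5, b=6  (cross out the old value of a)
print(a + b)   # output: 11
```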
-
Explain in Plain English (EiPE) questions evaluate whether students can understand and explain the high-level purpose of code. We conducted a qualitative think-aloud study of introductory programming students solving EiPE questions. In this paper, we focus on how students use tracing (mental execution) to understand code in order to explain it. We found that, in some cases, tracing can be an effective strategy for novices to understand and explain code. Furthermore, we observed three problems that prevented tracing from being helpful: 1) not employing tracing when it could be helpful (some struggling students explained correctly after the interviewer suggested tracing the code), 2) tracing incorrectly due to misunderstandings of the programming language, and 3) tracing with a set of inputs that did not sufficiently expose the code's behavior (once the interviewer suggested inputs, students explained correctly). These results suggest that we should teach students to use tracing as a method for understanding code and teach them how to select appropriate inputs to trace.
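The third problem, poorly chosen inputs, is easy to illustrate. In the sketch below (our example, not one from the study), an all-even input makes the function look like it copies its argument, while a mixed input reveals its actual behavior.

```python
# Illustrative example of inputs that do or do not expose behavior
# (ours, not from the study).

def mystery(xs):
    result = []
    for x in xs:
        if x % 2 == 0:
            result.append(x)
    return result

print(mystery([2, 4, 6]))   # [2, 4, 6] -- looks like it copies the list
print(mystery([1, 2, 3]))   # [2]       -- reveals it keeps only evens
```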
-
programming concepts in programming assignments in a CS1 course. We seek to answer the following research questions: RQ1. How effectively can large language models identify knowledge components in a CS1 course from programming assignments? RQ2. Can large language models be used to extract program-level knowledge components, and how can that information be used to identify students' misconceptions? Preliminary results demonstrated a high similarity between the course-level knowledge components retrieved from a large language model and an expert-generated list.
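A prompting setup for this kind of extraction might look like the following sketch. The prompt wording and the `call_llm` hook are our assumptions for illustration; the abstract does not specify the prompts or model API used.

```python
# Hypothetical sketch of extracting knowledge components from a student
# submission with a large language model; `call_llm` is a placeholder for
# whatever model API is used, not a function from the paper.

PROMPT = """You are given a CS1 programming assignment submission.
List the knowledge components (e.g., 'for loops', 'list indexing')
a student must understand to write this code, one per line.

Submission:
{code}
"""

def extract_knowledge_components(code, call_llm):
    """Ask the model for knowledge components; return them as a list."""
    response = call_llm(PROMPT.format(code=code))
    return [line.strip() for line in response.splitlines() if line.strip()]
```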