One aspect of developing correct code, code that functions as specified, is annotating loops with suitable invariants. Loop invariants are useful for human reasoning and are necessary for tool-assisted automated reasoning. Writing loop invariants is a difficult task for all students, especially beginning software engineering students. To help students learn to write adequate invariants, we need to understand not only what errors they make, but also why they make them. This poster discusses the use of a Web IDE backed by the RESOLVE verification engine to aid students in developing loop invariants and to collect performance data. In addition to collecting submitted invariant answers, the system asks students to describe the steps or thought processes by which they arrived at their answers for each submission. The answers and explanations are then analyzed using a mixed-methods approach. The resulting categories of answers indicate that students are able to use formal methods concepts with which they are already familiar, such as pre- and post-conditions, as a starting place for developing adequate loop invariants. Additionally, some common trouble spots in learning to write invariants are identified. The results will be useful for guiding classroom instruction and automated tutoring.
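To make the idea concrete, here is a minimal sketch (ours, not the paper's, and in Python rather than RESOLVE) of how a postcondition can suggest a loop invariant: the invariant is the postcondition restricted to the portion of the input processed so far, checked here with runtime assertions.

```python
# Illustrative only: deriving a loop invariant for a summation loop
# from its postcondition total == a[0] + ... + a[len(a)-1].
def array_sum(a):
    """Postcondition: returns the sum of all elements of a."""
    total = 0
    i = 0
    while i < len(a):
        # Invariant: total equals the sum of the processed prefix a[0..i),
        # i.e., the postcondition restricted to the work done so far.
        assert total == sum(a[:i])
        total += a[i]
        i += 1
    # Invariant plus the exit condition (i == len(a)) gives the postcondition.
    assert total == sum(a)
    return total
```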
Tool-Aided Loop Invariant Development: Insights into Student Conceptions and Difficulties
To develop code that meets its specification and is verifiably correct, such as in a software engineering course, students must be able to understand formal contracts and annotate their code with assertions such as loop invariants. To assist in developing suitable instructor and automated tool interventions, this research aims to go beyond simple pre- and post-conditions and gain insight into student learning of loop invariants involving objects. As students developed suitable loop invariants for given code with the aid of an online system backed by a verification engine, each attempt, whether correct or incorrect, was collected and analyzed automatically, and catalogued using an iterative process to capture common difficulties. Students were also asked to explain the thought process by which they arrived at their answer for each submission. The collected explanations were analyzed manually and found to be useful both for assessing students' level of understanding and for extracting actionable information for instructors and automated tutoring systems. Qualitative conclusions include the impact of the medium.
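As an illustration of what an invariant "involving objects" can look like, here is a hedged sketch invented for this summary: RESOLVE would phrase the invariant over the stack's abstract mathematical value (a string of entries), whereas this Python version checks the analogous relation with assertions.

```python
# Illustrative only: reversing a stack (modeled as a Python list whose
# end is the top) while maintaining an invariant over abstract values.
def reverse_stack(s):
    """Postcondition: returns the reverse of the input stack s."""
    original = list(s)  # snapshot of s's abstract value at entry, for checking
    t = []
    while s:
        # Invariant: the remaining s followed by the reversal of t
        # reconstructs the original stack; when s empties, t is the reverse.
        assert s + list(reversed(t)) == original
        t.append(s.pop())
    assert t == list(reversed(original))
    return t
```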
- Award ID(s): 1914667
- PAR ID: 10294497
- Date Published:
- Journal Name: ITiCSE '21: Proceedings of the 26th ACM Conference on Innovation and Technology in Computer Science Education
- Page Range / eLocation ID: 387 to 393
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Understanding the thought processes of students as they progress from initial (incorrect) answers toward correct answers is a challenge for instructors, both in this pandemic and beyond. This paper presents a general network visualization learning analytics system that helps instructors view a sequence of answers input by students in a way that makes student learning progressions apparent. The system allows instructors to study individual and group learning at various levels of granularity. The paper illustrates how the visualization system is employed to analyze student responses collected through an intervention. The intervention is BeginToReason, an online tool that helps students learn and use symbolic reasoning: reasoning about code behavior through abstract values instead of concrete inputs. The specific focus is analysis of tool-collected student responses as they perform reasoning activities on code involving conditional statements. Student learning is analyzed using the visualization system and a post-test. Visual analytics highlights include instances where students producing one set of incorrect answers initially perform better than a different set, and instances where student thought processes do not cluster well. Post-test data analysis provides a measure of students' ability to apply what they have learned and of their holistic understanding.
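The paper's system is not described at code level here; the following minimal sketch (our assumption of the core data structure, with invented answer labels) shows how per-student answer sequences can be turned into a weighted directed graph of answer transitions, the kind of structure such a visualization would render.

```python
# Illustrative only: build a weighted edge list from answer sequences.
from collections import Counter

def answer_transition_graph(sequences):
    """sequences: one list of answers per student, in submission order."""
    edges = Counter()
    for seq in sequences:
        for prev, curr in zip(seq, seq[1:]):
            edges[(prev, curr)] += 1  # weight = how often prev led to curr
    return edges

# Example: three students converging on the correct answer.
graph = answer_transition_graph([
    ["wrong_A", "correct"],
    ["wrong_A", "wrong_B", "correct"],
    ["wrong_B", "correct"],
])
print(graph[("wrong_B", "correct")])  # 2
```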
As enrollment in CS programs has risen, it has become increasingly difficult for teaching staff to provide timely and detailed guidance on student projects. To address this, instructors use automated assessment tools to evaluate students' code and processes as they work. Even with automation, understanding students' progress, and more importantly, whether students are making the 'right' progress toward the solution, is challenging at scale. To help students manage their time and learn good software engineering processes, instructors may create intermediate deadlines, or milestones, to support progress. However, students' adherence to these processes is opaque, which may hinder student success and instructional support. A better understanding of how students follow process guidance in practice is needed to identify the right assignment structures for developing high-quality process skills. We use data collected from an automated assessment tool to calculate a set of 15 progress indicators and investigate which types of progress are made during four stages of two projects in a CS2 course. These stages are delimited by milestones that help guide student activities. We show how examining which progress indicators are triggered significantly more or less during each stage validates whether students are adhering to the goals of each milestone. We also find that students trigger some progress indicators earlier on the second project, suggesting improving processes over time.
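As a hedged sketch of the kind of analysis described (the indicator names, event format, and milestone boundaries below are invented for illustration), indicator triggers can be tallied per milestone-delimited stage:

```python
# Illustrative only: count indicator triggers within each stage.
from collections import defaultdict

def triggers_per_stage(events, milestones):
    """events: (timestamp, indicator) pairs; milestones: sorted timestamps
    splitting the project into len(milestones) + 1 stages."""
    counts = defaultdict(lambda: defaultdict(int))
    for ts, indicator in events:
        stage = sum(ts >= m for m in milestones)  # index of containing stage
        counts[stage][indicator] += 1
    return counts

# Example: two milestones (at t=4 and t=8) give three stages 0, 1, 2.
events = [(1, "compiles"), (5, "tests_added"), (9, "tests_pass")]
print(triggers_per_stage(events, [4, 8])[1]["tests_added"])  # 1
```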
Background and context. "Explain in Plain English" (EiPE) questions ask students to explain the high-level purpose of code, requiring them to understand the macrostructure of the program's intent. Much is known about the techniques experts use to comprehend code, but less is known about how we should teach novices to develop this capability. Objective. Identify techniques that can be taught to students to assist them in developing their ability to comprehend code, and contribute to the body of knowledge of how novices develop their code comprehension skills. Method. We developed interventions that could be taught to novices, motivated by previous research on how experts comprehend code: prompting students to identify beacons, identify the roles of variables, trace, and trace abstractly. We conducted think-aloud interviews of introductory programming students solving EiPE questions, varying which interventions each student was taught. Some participants were interviewed multiple times throughout the semester to observe any changes in behavior over time. Findings. Identifying beacons and naming variable roles were rarely helpful, as they did not encourage students to integrate their understanding of that piece in relation to other lines of code. However, prompting students to explain each variable's purpose helped them focus on useful subsets of the code, which helped manage cognitive load. Tracing was helpful when students incorrectly recognized common programming patterns or made mistakes comprehending syntax (text-surface). Prompting students to pick inputs that potentially contradicted their current understanding of the code proved a simple way to help them select inputs to trace effectively. Abstract tracing helped students see high-level, functional relationships between variables. In addition, we observed students spontaneously sketching algorithmic visualizations that similarly helped them see relationships between variables. Implications. Because students can get stuck at many points in the process of code comprehension, there seems to be no silver-bullet technique that helps in every circumstance. Instead, effective instruction for code comprehension will likely involve teaching a collection of techniques. In addition to these techniques, meta-knowledge about when to apply each technique will need to be learned, but that is left for future research. At present, we recommend teaching a bottom-up, concrete-to-abstract approach.
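To illustrate the input-selection tactic from the findings with a small invented snippet (not from the study): tracing with inputs chosen to contradict a hypothesized behavior quickly separates a correct reading from an incorrect one.

```python
# Illustrative only: a snippet of the kind students might trace in EiPE tasks.
def mystery(x):
    if x < 0:
        x = -x
    return x

# A student hypothesizing "this always negates x" should pick a positive input:
print(mystery(3))   # 3 -> contradicts the "always negates" hypothesis
print(mystery(-3))  # 3 -> consistent with "absolute value"
```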