Title: Learning Loop Invariants
One aspect of developing correct code, code that functions as specified, is annotating loops with suitable invariants. Loop invariants are useful for human reasoning and are necessary for tool-assisted automated reasoning. Writing loop invariants can be a difficult task for all students, especially beginning software engineering students. In helping students learn to write adequate invariants, we need to understand not only what errors they make, but also why they make them. This poster discusses the use of a Web IDE backed by the RESOLVE verification engine to aid students in developing loop invariants and to collect performance data. In addition to collecting submitted invariant answers, students are asked to describe the steps or thought processes by which they arrived at their answers for each submission. The answers and reasons are then analyzed using a mixed-methods approach. Resulting categories of answers indicate that students are able to use formal method concepts with which they are already familiar, such as pre- and post-conditions, as a starting place to develop adequate loop invariants. Additionally, some common trouble spots in learning to write invariants are identified. The results will be useful to guide classroom instruction and automated tutoring.
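To make the idea concrete, the following is a minimal illustrative sketch (in Python with runtime assertions, not RESOLVE notation) of a loop annotated with an invariant derived from the pre- and post-conditions; the function and its names are hypothetical, not drawn from the study materials.

```python
def sum_up_to(n: int) -> int:
    """Return 0 + 1 + ... + n.
    Pre:  n >= 0
    Post: result == n * (n + 1) // 2
    """
    assert n >= 0                                  # pre-condition
    total, i = 0, 0
    while i < n:
        # Loop invariant: total holds the sum of 0..i, and i stays in range.
        # Note it generalizes the post-condition by replacing n with i.
        assert total == i * (i + 1) // 2 and 0 <= i <= n
        i += 1
        total += i
    assert total == n * (n + 1) // 2               # post-condition
    return total
```

The invariant is obtained from the post-condition by weakening it over the loop variable, which mirrors the strategy the abstract reports students using: starting from familiar pre- and post-conditions to construct an adequate invariant.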
Award ID(s):
1914667 1915088
PAR ID:
10195158
Author(s) / Creator(s):
Date Published:
Journal Name:
SIGCSE '20: Proceedings of the 51st ACM Technical Symposium on Computer Science Education
Page Range / eLocation ID:
1426 to 1426
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    To develop code that meets its specification and is verifiably correct, such as in a software engineering course, students must be able to understand formal contracts and annotate their code with assertions such as loop invariants. To assist in developing suitable instructor and automated tool interventions, this research aims to go beyond simple pre- and post-conditions and gain insight into student learning of loop invariants involving objects. As students developed suitable loop invariants for given code with the aid of an online system backed by a verification engine, each student attempt, whether correct or incorrect, was collected and analyzed automatically, and catalogued using an iterative process to capture common difficulties. Students were also asked to explain the thought process that led to their answer for each submission. The collected explanations were analyzed manually and found to be useful for assessing students' level of understanding and for extracting actionable information for instructors and automated tutoring systems. Qualitative conclusions include the impact of the medium.
  2. Understanding the thought processes of students as they progress from initial (incorrect) answers toward correct answers is a challenge for instructors, both in this pandemic and beyond. This paper presents a general network visualization learning analytics system that helps instructors view a sequence of answers input by students in a way that makes student learning progressions apparent. The system allows instructors to study individual and group learning at various levels of granularity. The paper illustrates how the visualization system is employed to analyze student responses collected through an intervention. The intervention is BeginToReason, an online tool that helps students learn and use symbolic reasoning, that is, reasoning about code behavior through abstract values instead of concrete inputs. The specific focus is analysis of tool-collected student responses as they perform reasoning activities on code involving conditional statements. Student learning is analyzed using the visualization system and a post-test. Visual analytics highlights include instances where students producing one set of incorrect answers initially perform better than a different set, and instances where student thought processes do not cluster well. Post-test data analysis provides a measure of students' ability to apply what they have learned and of their holistic understanding.
  3. [This paper is part of the Focused Collection in Investigating and Improving Quantum Education through Research.] We discuss an investigation of student sensemaking and reasoning in the context of degenerate perturbation theory (DPT) in quantum mechanics. We find that advanced undergraduate and graduate students in quantum physics courses often struggled with expertlike sensemaking and reasoning to solve DPT problems. The sensemaking and reasoning were particularly challenging for students as they tried to integrate physical and mathematical concepts to solve DPT problems. Their sensemaking showed local coherence but lacked global consistency, with different knowledge resources getting activated in different problem-solving tasks even if the same concepts were applicable. Depending upon the issues involved in the DPT problems, students were sometimes stuck in the "physics mode" or "math mode" and found it challenging to coordinate and integrate the physics and mathematics appropriately to solve quantum mechanics problems involving DPT. Their sensemaking shows the use of various reasoning primitives. It also shows that some advanced students struggled with self-monitoring and checking their answers to make sure they were consistent across different problems. Some also relied on memorized information, invoked authority, and did not make appropriate connections between their DPT problem solutions and the outcomes of experiments. Advanced students in quantum mechanics often displayed patterns of challenges in sensemaking and reasoning analogous to those that have been found in introductory physics. Student sensemaking and reasoning show that these advanced students are still developing expertise in this novel quantum physics domain as they learn to integrate physical and mathematical concepts. Published by the American Physical Society, 2024.
  4. The process of synthesizing solutions for mathematical problems is cognitively complex. Students formulate and implement strategies to solve mathematical problems, develop solutions, and make connections between their learned concepts as they apply their reasoning skills to solve such problems. Gaps in student knowledge or shallowly learned concepts may cause students to guess at answers or otherwise apply the wrong approach, resulting in errors in their solutions. Despite the complexity of the synthesis process in mathematics learning, teachers' knowledge and ability to anticipate areas of potential difficulty is essential and correlated with student learning outcomes. Preemptively identifying the common misconceptions in students that result in subsequent incorrect attempts can be arduous and unreliable, even for experienced teachers. This paper aims to help teachers identify the subsequent incorrect attempts that commonly occur when students are working on math problems, so that they can address the underlying gaps in knowledge and common misconceptions through feedback. We report on a longitudinal analysis of historical data, from a computer-based learning platform, exploring the incorrect answers in the prior school years ('15-'20) that establish the commonality of wrong answers on two Open Educational Resources (OER) curricula, Illustrative Math (IM) and EngageNY (ENY), for grades 6, 7, and 8. We observe that incorrect answers are pervasive across 5 academic years despite changes in the underlying student and teacher population. Building on our findings regarding the Common Wrong Answers (CWAs), we report on the goals and task analysis that we leveraged in designing and developing a crowdsourcing platform for teachers to write Common Wrong Answer Feedback (CWAF) aimed at remediating the underlying cause of the CWAs.
Finally, we report on an in vivo study analyzing the effectiveness of CWAFs using two approaches: first, we use next-problem correctness as a dependent measure after receiving CWAF in an intent-to-treat analysis; second, we use next-attempt correctness as a dependent measure after receiving CWAF in a treated analysis. With the rise in popularity and usage of computer-based learning platforms, this paper explores the potential benefits of scalability in identifying CWAs and the subsequent usage of crowd-sourced CWAFs in enhancing the student learning experience through remediation.
  5. Martin, Fred; Norouzi, Narges; Rosenthal, Stephanie (Ed.)
    This paper examines the use of LLMs to support the grading and explanation of short-answer formative assessments in K12 science topics. While significant work has been done on programmatically scoring well-structured student assessments in math and computer science, many of these approaches produce a numerical score and stop short of providing teachers and students with explanations for the assigned scores. In this paper, we investigate few-shot, in-context learning with chain-of-thought reasoning and active learning using GPT-4 for automated assessment of students’ answers in a middle school Earth Science curriculum. Our findings from this human-in-the-loop approach demonstrate success in scoring formative assessment responses and in providing meaningful explanations for the assigned score. We then perform a systematic analysis of the advantages and limitations of our approach. This research provides insight into how we can use human-in-the-loop methods for the continual improvement of automated grading for open-ended science assessments. 