

Title: Prompting for Free Self-Explanations Promotes Better Code Comprehension
We present in this paper a summary analysis of log files collected during an experiment designed to test the hypothesis that prompting for free self-explanations leads to better comprehension of computer code examples. The results indicate that students who were prompted to self-explain while trying to understand code examples performed significantly better at predicting the correct output of those examples than students who were only prompted to read the code examples and predict their output.
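The kind of code-comprehension item described above can be illustrated with a short, hypothetical example (not taken from the study's actual materials): the student reads the code, optionally self-explains it, and then predicts what it prints.

```python
# Hypothetical predict-the-output item of the kind used in such
# experiments; the function name and data are illustrative only.

def mystery(values):
    """Sum the even numbers in a list."""
    total = 0
    for v in values:
        if v % 2 == 0:
            total += v
    return total

print(mystery([1, 2, 3, 4]))  # prints 6
```

A student who self-explains (e.g., "the if keeps only even values, so 2 and 4 are added") has articulated exactly the reasoning needed to predict the output correctly.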
Award ID(s):
1822816
NSF-PAR ID:
10290873
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of The 5th Educational Data Mining in Computer Science Education (CSEDM) Workshop.
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Confrustion, a mix of confusion and frustration sometimes experienced while grappling with instructional materials, is not necessarily detrimental to learning. Prior research has shown that studying erroneous examples can increase students’ experiences of confrustion, while at the same time helping them learn and overcome their misconceptions. In the study reported in this paper, we examined students’ knowledge and misconceptions about decimal numbers before and after they interacted with an intelligent tutoring system presenting either erroneous examples targeting misconceptions (erroneous example condition) or practice problems targeting the same misconceptions (problem-solving condition). While students in both conditions significantly improved their performance from pretest to posttest, students in the problem-solving condition improved significantly more and experienced significantly less confrustion. When controlling for confrustion levels, there were no differences in performance. This study is interesting in that, unlike prior studies, the higher confrustion that resulted from studying erroneous examples was not associated with better learning outcomes; instead, it was associated with poorer learning. We propose several possible explanations for this different outcome and hypothesize that revisions to the explanation prompts to make them more expert-like may have also made them – and the erroneous examples that they targeted – less understandable and less effective. Whether prompted self-explanation options should be modeled after the shorter, less precise language students tend to use or the longer, more precise language of experts is an open question, and an important one both for understanding the mechanisms of self-explanation and for designing self-explanation options deployed in instructional materials. 
  3. Abstract: How well do code-writing tasks measure students’ knowledge of programming patterns and anti-patterns? How can we assess this knowledge more accurately? To explore these questions, we surveyed 328 intermediate CS students and measured their performance on different types of tasks, including writing code, editing someone else’s code, and, if applicable, revising their own alternatively structured code. Our tasks targeted returning a Boolean expression and using unique code within an if and else. We found that code writing sometimes underestimated student knowledge. For tasks targeting returning a Boolean expression, over 55% of students who initially wrote with non-expert structure successfully revised to expert structure when prompted, even though the prompt did not include guidance on how to improve their code. Further, over 25% of students who initially wrote non-expert code could properly edit someone else’s non-expert code to expert structure. These results show that non-expert code is not a reliable indicator of deep misconceptions about the structure of expert code. Finally, although code writing is correlated with code editing, the relationship is weak: a model with code writing as the sole predictor of code editing explains less than 15% of the variance. Model accuracy improves when we include additional predictors that reflect other facets of knowledge, namely the identification of expert code and selection of expert code as more readable than non-expert code. Together, these results indicate that a combination of code writing, revising, editing, and identification tasks can provide a more accurate assessment of student knowledge of programming patterns than code writing alone.
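As a hypothetical illustration (these are not the study's actual task items), the two patterns the tasks targeted — returning a Boolean expression and keeping only branch-unique code inside an if/else — might contrast non-expert and expert structure like this:

```python
# Pattern 1: returning a Boolean expression.

def has_fee_nonexpert(balance):
    # Non-expert structure: Boolean expression wrapped in a redundant if/else
    if balance < 0:
        return True
    else:
        return False

def has_fee_expert(balance):
    # Expert structure: return the Boolean expression directly
    return balance < 0

# Pattern 2: unique code within an if and else.

def greeting_nonexpert(hour):
    # Non-expert structure: the shared "Hello! " prefix is duplicated
    # in both branches
    if hour < 12:
        return "Hello! Good morning"
    else:
        return "Hello! Good afternoon"

def greeting_expert(hour):
    # Expert structure: only the branch-unique text depends on the condition
    suffix = "Good morning" if hour < 12 else "Good afternoon"
    return "Hello! " + suffix
```

In both pairs the functions are behaviorally identical; the difference the tasks probe is purely structural.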
  4. Research Question: The vast amount of material available on-line has prompted researchers to examine how undergraduate students sort, select, and evaluate the results that emerge from their searches. Because students depend on on-line material to support their learning of course content, understanding the basis of this process is imperative for institutions developing more equitable and far-reaching strategies for student success. Given this context, this study asks the following question: When students are faced with several choices that emerge from their on-line search, what criteria do they use to evaluate and select resources that support learning of course content? Methods: To answer the research question, we drew on interview data from 12 students enrolled in a community college district, who offered insights on how they evaluated on-line resources for their science, technology, engineering, and mathematics (STEM) courses. Results: We find that trust and utility were the prominent criteria by which on-line resources were evaluated. Students were skeptical of the accuracy of content in a given resource and used several dimensions of trust to direct their assessment. Students also evaluated with purpose, searching for and sorting resources that reflected their goals and preferred conditions for engagement, or what we consider utility. Conclusion: Understanding how students sort and evaluate on-line resources offers insights into a learning environment increasingly defined by the internet and informs how institutions and instructors might better incorporate these resources into their curriculum and academic supports. Our findings reveal implications for institutional leadership, faculty, and student services.
  5. Stereotypes about men being better than women at mathematics appear to influence female students’ interest and performance in the subject. Given the potential motivational benefits of digital learning games, it is possible that games could help to reduce math anxiety, increase self-efficacy, and lead to better learning outcomes for female students. We are exploring this possibility in our work with Decimal Point, a digital learning game that scaffolds practice with decimal operations for 5th and 6th grade students. In several studies with various versions of the game, involving over 800 students across multiple years, we have consistently uncovered a learning advantage for female students with the game. In our most recent investigation of this gender effect, we decided to experiment with a central feature of the game: its use of prompted self-explanation to support student learning. Prior research has suggested that female students might benefit more from self-explanation than male students. In the new study, involving 214 middle school students, we compared three versions of self-explanation in the game – menu-based, scaffolded, and focused – each presenting students with a different type of prompted self-explanation after they solved problems in the game. We found that the focused approach led to more learning across all students than the menu-based approach, a result reported in an earlier paper. In the additional results reported in this paper, we again uncovered the gender effect – female students learned more from the game than male students, regardless of the version of self-explanation – and also found a trend in which female students made fewer self-explanation errors, suggesting they may have been more deliberate and thoughtful in their self-explanations. This self-explanation finding is a possible key to further investigation into how and why we see the gender effect in Decimal Point.