Title: A Comparative Study of Free Self-Explanations and Socratic Tutoring Explanations for Source Code Comprehension
We present in this paper the results of a randomized controlled trial experiment that compared the effectiveness of two instructional strategies that scaffold learners' code comprehension processes: eliciting Free Self-Explanation and a Socratic Method. Code comprehension, i.e., understanding source code, is a critical skill for both learners and professionals. Improving learners' code comprehension skills should result in improved learning, which in turn should help with retention in intro-to-programming courses, which are notorious for very high attrition rates due to the complexity of programming topics. To this end, the reported experiment explores the effectiveness of various strategies for eliciting self-explanation as a way to improve comprehension and learning during complex code comprehension and learning activities in intro-to-programming courses. The experiment showed pre-/post-test learning gains of 30% (M = 0.30, SD = 0.47) for the Free Self-Explanation condition and learning gains of 59% (M = 0.59, SD = 0.39) for the Socratic Method condition. Furthermore, we investigated the behavior of the two strategies as a function of students' prior knowledge, which was measured using learners' pretest scores. For the Free Self-Explanation condition, there was no significant difference in mean learning gains for low vs. high knowledge students. The magnitude of the difference in performance (mean difference = 0.02, 95% CI: -0.34 to 0.39) was very small (eta squared = 0.006). Likewise, the Socratic Method showed no significant difference in mean learning gains between low vs. high performing students. The magnitude of the performance difference (mean difference = -0.24, 95% CI: -0.534 to 0.03) was large (eta squared = 0.10). These findings suggest that eliciting self-explanations can be an effective strategy and that guided self-explanation, as in the Socratic Method condition, is more effective at inducing learning gains.
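The abstract does not spell out how the learning gains or effect sizes were computed; purely for orientation, two common conventions (assumed here, not confirmed by the paper) are a simple or normalized pre-/post-test gain score and the eta-squared effect size:

\[ g_{\text{simple}} = \text{post} - \text{pre}, \qquad g_{\text{norm}} = \frac{\text{post} - \text{pre}}{1 - \text{pre}}, \qquad \eta^{2} = \frac{SS_{\text{between}}}{SS_{\text{total}}} \]

Under the normalized-gain reading, gains of 0.30 vs. 0.59 would mean the Socratic Method condition closed roughly twice as much of the gap between pretest score and maximum score, and an eta squared of 0.10 would mean that about 10% of the variance in gains is associated with the prior-knowledge grouping.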
Award ID(s):
1822752 1918751
NSF-PAR ID:
10301173
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 52nd ACM Technical Symposium on Computer Science Education
Page Range / eLocation ID:
219 to 225
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Self-explanation can increase students' comprehension in complex domains; however, it works most effectively with a human tutor who can provide corrections and scaffolding. In this paper, we present our attempt to scale up the use of self-explanations in learning programming by delegating assessment and scaffolding of explanations to an intelligent tutor (a hypothetical sketch of such automated assessment appears after this list). To assess our approach, we performed a randomized controlled trial experiment that measured the impact of automatic assessment and scaffolding of self-explanations on code comprehension and learning. The study results indicate that low-prior-knowledge students in the experimental condition learned more than high-prior-knowledge students in the same condition, but no such difference was observed when students in the control condition were grouped by prior knowledge in the same way.
  2. Crossley, Scott; Popescu, Elvira (Eds.)
    We present here a novel instructional resource, called DeepCode, to support deep code comprehension and learning in intro-to-programming courses (CS1 and CS2). DeepCode is a set of instructional code examples, which we call a codeset, annotated by our team with comments (e.g., explaining the logical steps of the underlying problem being solved) and related instructional questions that can serve as hints to help learners think about and articulate explanations of the code. While DeepCode was designed primarily to serve our larger effort of developing an intelligent tutoring system (ITS) that fosters the monitoring, assessment, and development of code comprehension skills for students learning to program, the codeset can be used for other purposes, such as assessment, problem solving, and other learning activities such as studying worked-out code examples with explanations and code visualizations. We present the underlying principles, theories, and frameworks behind our design process and the annotation guidelines, and we summarize the resulting codeset of 98 annotated Java code examples, which includes 7,157 lines of code (including comments), 260 logical steps, 260 logical step details, 408 statement-level comments, and 590 scaffolding questions. An illustrative snippet in this annotation style is sketched after this list.
  3. Prompted self-explanation, in which learners are induced to explain how they have solved problems, is a powerful instructional technique. Self-explanation can be prompted within learning technology by asking learners to construct their own self-explanations or to select explanations from a menu. The menu-based approach has led to the best learning outcomes in the relatively few cases where it has been studied in digital learning games, contrary to some self-explanation theory. In a classroom study of 214 5th and 6th graders, in which the students played a digital learning game, we compared three forms of prompted self-explanation: menu-based, scaffolded, and focused (i.e., open-ended text entry, but with a focused prompt). Students in the focused condition learned more than students in the menu-based condition at delayed posttest, with no other learning differences between the conditions. This suggests that focused self-explanations may be especially beneficial for retention and deeper knowledge.
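Illustrative sketch for the intelligent-tutor item above: the paper does not describe how the tutor assesses self-explanations, so the mechanism below (scoring a student explanation by word overlap with an expert reference explanation and scaffolding when the overlap is low) is purely a hypothetical assumption, not the authors' method. The class name, threshold, and example strings are invented for illustration.

import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration only: scores a student's self-explanation by word
// overlap with an expert reference explanation and decides whether to scaffold.
public class ExplanationAssessmentSketch {

    // Fraction of reference-explanation words that also appear in the student's explanation.
    static double overlapScore(String studentExplanation, String referenceExplanation) {
        Set<String> student = tokenize(studentExplanation);
        Set<String> reference = tokenize(referenceExplanation);
        if (reference.isEmpty()) {
            return 0.0;
        }
        int covered = 0;
        for (String word : reference) {
            if (student.contains(word)) {
                covered++;
            }
        }
        return (double) covered / reference.size();
    }

    // Lowercase the text and split it into unique word tokens.
    static Set<String> tokenize(String text) {
        Set<String> tokens = new HashSet<>();
        for (String token : text.toLowerCase().split("\\W+")) {
            if (!token.isEmpty()) {
                tokens.add(token);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        String reference = "the loop adds each even element of the array to the running sum";
        String student = "it goes through the array and adds the even numbers to a sum";
        double score = overlapScore(student, reference);
        // The 0.5 threshold is an arbitrary illustrative choice.
        System.out.printf("overlap = %.2f -> %s%n", score,
                score < 0.5 ? "scaffold with a follow-up question" : "accept the explanation");
    }
}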
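Illustrative sketch for the DeepCode item above: the snippet below is not taken from the actual codeset; the comment format, logical-step wording, and scaffolding question are assumptions meant only to make the described annotation scheme concrete.

// Hypothetical DeepCode-style annotated example (not from the actual codeset).
public class EvenSum {

    // Logical step: traverse the array once and accumulate only the even values.
    // Scaffolding question: why does (value % 2 == 0) identify even numbers,
    // and what would change if we wanted the odd numbers instead?
    static int sumOfEvens(int[] values) {
        int sum = 0;                  // running total of the even values seen so far
        for (int value : values) {    // visit each element exactly once
            if (value % 2 == 0) {     // keep only the even elements
                sum += value;
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = {3, 8, 5, 12, 7, 4};
        // Expected output: 24 (8 + 12 + 4)
        System.out.println(sumOfEvens(data));
    }
}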