Worked examples (solutions to typical programming problems, presented as source code in a given
language and used to explain the topics from a programming class) are among the most popular types
of learning content in programming classes. Most approaches and tools for presenting these examples to
students are based on line-by-line explanations of the example code. However, instructors rarely have
time to provide line-by-line explanations for a large number of examples typically used in a programming
class. In this paper, we explore and assess a human-AI collaboration approach to authoring worked
examples for Java programming. We introduce an authoring system for creating Java worked examples
that generates a starting version of code explanations and presents it to the instructor to edit if necessary.
We also present a study that assesses the quality of explanations created with this approach.
Explaining Code Examples in Introductory Programming Courses: LLM vs Humans
Worked examples, which present explained code for solving typical programming problems, are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide explanations for the many examples typically used in a programming class. In this paper, we assess the feasibility of using LLMs to generate code explanations for passive and active example exploration systems. To achieve this goal, we compare the code explanations generated by ChatGPT with explanations generated by both experts and students.
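The generation step these papers describe can be illustrated with a short sketch. A minimal example, assuming the OpenAI Python client; the prompt wording, model choice, and Java snippet are illustrative placeholders, not the authors' actual setup:

```python
# Sketch: asking an LLM for line-by-line explanations of a Java worked example.
# Assumes the OpenAI Python client (pip install openai); the prompt and model
# are illustrative, not the prompts used in the papers on this page.
from openai import OpenAI

JAVA_EXAMPLE = """\
public class Average {
    public static void main(String[] args) {
        int[] values = {4, 8, 15, 16};
        int sum = 0;
        for (int v : values) {
            sum += v;
        }
        System.out.println((double) sum / values.length);
    }
}
"""

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical choice; any chat model works here
    messages=[
        {"role": "system",
         "content": "You explain Java worked examples to novice programmers."},
        {"role": "user",
         "content": "Explain this Java program line by line:\n" + JAVA_EXAMPLE},
    ],
)
print(response.choices[0].message.content)
```

In the human-AI authoring workflow described above, output like this would serve only as a starting draft that the instructor reviews and edits.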
- Award ID(s):
- 2213789
- NSF-PAR ID:
- 10518252
- Publisher / Repository:
- AAAI
- Date Published:
- Journal Name:
- Workshop on AI for Education - Bridging Innovation and Responsibility at AAAI 2024
- Subject(s) / Keyword(s):
- Programming; Worked Examples; Code Explanations; ChatGPT
- Format(s):
- Medium: X
- Location:
- Vancouver, Canada
- Sponsoring Org:
- National Science Foundation
More Like this
-
Worked examples are among the most popular types of learning content in programming classes. However, instructors rarely have time to provide line-by-line explanations for the large number of examples typically used in a programming class. In this paper, we explore and assess a human-AI collaboration approach to authoring worked examples for Java programming. We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations and presents it to the instructor to edit if necessary. We also present a study that assesses the quality of explanations created with this approach.
-
This paper presents a comparison of two instructional strategies meant to help learners better comprehend code and learn programming concepts: reading code examples annotated with expert explanations (worked-out examples) versus scaffolded self-explanation of code examples using an automated system (an Intelligent Tutoring System). A randomized controlled trial was conducted with 90 university students who were assigned to either the control group (reading worked-out examples, a passive strategy) or the experimental group, where participants were asked to self-explain and received help, if needed, in the form of questions from the tutoring system (scaffolded self-explanation, an interactive strategy). We found that students with low prior knowledge in the experimental condition had significantly higher learning gains than students with high prior knowledge. However, in the control condition, this distinction in learning outcomes based on prior knowledge was not observed. We also analyzed the effect of self-efficacy on learning gains and on the nature of self-explanation. Low self-efficacy students learned almost twice as much in the interactive condition as in the passive condition, although the difference was not significant, probably because of the small sample size. We also found that high self-efficacy students tend to provide more relational explanations, whereas low self-efficacy students provide more multi-structural or line-by-line explanations.
-
This paper systematically explores how Large Language Models (LLMs) generate explanations of code examples of the type used in intro-to-programming courses. As we show, the nature of code explanations generated by LLMs varies considerably based on the wording of the prompt, the target code examples being explained, the programming language, the temperature parameter, and the version of the LLM. Nevertheless, they are consistent in two major respects for Java and Python: the readability level, which hovers around the 7th-8th grade level, and the lexical density, i.e., the proportion of meaningful words relative to the total explanation length. Furthermore, the explanations score very high on correctness but lower on three other metrics: completeness, conciseness, and contextualization.
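The two consistency metrics mentioned in this abstract are straightforward to compute. A minimal sketch, assuming the textstat and nltk packages; the paper's exact procedure may differ:

```python
# Sketch: readability grade and lexical density of an explanation.
# Assumes the textstat and nltk packages; the paper's exact computation
# may differ (e.g., in which POS tags count as "meaningful" words).
import textstat
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

CONTENT_TAGS = ("NN", "VB", "JJ", "RB")  # nouns, verbs, adjectives, adverbs

def lexical_density(text: str) -> float:
    """Share of content (meaningful) words among all alphabetic tokens."""
    tokens = [t for t in nltk.word_tokenize(text) if t.isalpha()]
    tagged = nltk.pos_tag(tokens)
    content = [word for word, tag in tagged if tag.startswith(CONTENT_TAGS)]
    return len(content) / len(tokens) if tokens else 0.0

explanation = ("This line declares an integer array named values "
               "and initializes it with four elements.")
print("FK grade level:", textstat.flesch_kincaid_grade(explanation))
print("Lexical density:", round(lexical_density(explanation), 2))
```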
-
Roman Bartak, Fazel Keshtkar, and Michael Franklin (Eds.) This paper presents a novel method to automatically assess self-explanations generated by students during code comprehension activities. The self-explanations are produced in the context of an online learning environment that asks students to freely explain Java code examples line by line. We explored a number of models combining textual features with machine learning algorithms such as Support Vector Regression (SVR), Decision Trees (DT), and Random Forests (RF). SVR performed best, with a correlation of 0.7088 with human judgments. The best model used a combination of features such as semantic measures obtained from a Sentence-BERT pre-trained model and from previously developed semantic algorithms used in a state-of-the-art intelligent tutoring system.
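The assessment pipeline this abstract describes can be approximated in a few lines. A minimal sketch, assuming the sentence-transformers and scikit-learn packages; the feature set, model name, and training data below are illustrative placeholders, not the paper's actual configuration:

```python
# Sketch: scoring a student's self-explanation with an SBERT similarity
# feature and SVR, in the spirit of the model described above. Assumes
# sentence-transformers and scikit-learn; the data is made up.
import numpy as np
from sentence_transformers import SentenceTransformer, util
from sklearn.svm import SVR

sbert = SentenceTransformer("all-MiniLM-L6-v2")

def features(student_expl: str, expert_expl: str) -> list[float]:
    """One semantic feature: SBERT cosine similarity to an expert explanation."""
    a, b = sbert.encode([student_expl, expert_expl])
    return [float(util.cos_sim(a, b))]

# Tiny, made-up training set: (student explanation, expert explanation, human score)
train = [
    ("this loop adds each value to sum",
     "The loop accumulates the array values into sum.", 0.9),
    ("it prints stuff",
     "The loop accumulates the array values into sum.", 0.2),
]
X = np.array([features(s, e) for s, e, _ in train])
y = np.array([score for _, _, score in train])

model = SVR().fit(X, y)

new_x = np.array([features(
    "each element gets added to the running total sum",
    "The loop accumulates the array values into sum.")])
print("Predicted quality:", model.predict(new_x).round(2))
```

A real model would combine several such semantic and textual features, which is where the reported 0.7088 correlation with human judgments comes from.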