Abstract: Undergraduates (n = 132) learned about the human respiratory system and then taught what they learned by explaining aloud on video. Following a 2 × 2 design, students either generated their own words or visuals on paper while explaining aloud, or they viewed instructor‐provided words or visuals while explaining aloud. One week after teaching, students completed explanation, drawing, and transfer tests. Teaching with provided or generated visualizations resulted in significantly higher transfer test performance than teaching with provided or generated words. Furthermore, teaching with provided visuals led to significantly higher drawing test performance than teaching with generated visuals. Finally, the number of elaborations in students' explanations during teaching did not significantly differ across groups but was significantly associated with subsequent explanation and transfer test performance. Overall, the findings partially support the hypothesis that visuals facilitate learning by explaining, yet the benefits appeared stronger for instructor‐provided visuals than learner‐generated drawings.
Third Graders' Interpretations of Subtraction Worked Examples: Matching Number Sentences and Visuals [Roundtable Session]
This study investigated 37 third graders' explanations of subtraction worked examples shown in number sentence or visual form (ten frame or number line) and their justifications for which visual and numerical worked examples corresponded to the same subtraction strategy. Results showed that third graders gave more detailed explanations in number sentence form than in visual form, whereas they had higher accuracy in matching number sentences to visuals than vice versa. When matching, they were more likely to reason sufficiently when they identified the processes represented in the worked examples than when they reasoned about the order of the numbers. When using worked examples, teachers should make use of visuals to help students focus on how the visuals represent the operations.
- Award ID(s): 1759254
- PAR ID: 10168731
- Date Published:
- Journal Name: Proceedings of the 2020 AERA Annual Meeting [Conference Canceled]
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
This paper presents a comparison of two instructional strategies meant to help learners better comprehend code and learn programming concepts: reading code examples annotated with expert explanations (worked-out examples) versus scaffolded self-explanation of code examples using an automated system (Intelligent Tutoring System). A randomized controlled trial study was conducted with 90 university students who were assigned to either the control group (reading worked-out examples, a passive strategy) or the experimental group, where participants were asked to self-explain and received help, if needed, in the form of questions from the tutoring system (scaffolded self-explanation, an interactive strategy). We found that students with low prior knowledge in the experimental condition had significantly higher learning gains than students with high prior knowledge. However, in the control condition, this distinction in learning outcomes based on prior knowledge was not observed. We also analyzed the effect of self-efficacy on learning gains and the nature of self-explanation. Low self-efficacy students learned almost twice as much in the interactive condition as in the passive condition, although the difference was not statistically significant, probably because of the small sample size. We also found that high self-efficacy students tended to provide more relational explanations, whereas low self-efficacy students provided more multi-structural or line-by-line explanations.
-
Worked examples are among the most popular types of learning content in programming classes. However, instructors rarely have time to provide line-by-line explanations for the large number of examples typically used in a programming class. In this paper, we explore and assess a human-AI collaboration approach to authoring worked examples for Java programming. We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations and presents it to the instructor to edit if necessary. We also present a study that assesses the quality of explanations created with this approach.
-
Worked examples (solutions to typical programming problems, presented as source code in a given language and used to explain topics from a programming class) are among the most popular types of learning content in programming classes. Most approaches and tools for presenting these examples to students are based on line-by-line explanations of the example code. However, instructors rarely have time to provide line-by-line explanations for the large number of examples typically used in a programming class. In this paper, we explore and assess a human-AI collaboration approach to authoring worked examples for Java programming. We introduce an authoring system for creating Java worked examples that generates a starting version of code explanations and presents it to the instructor to edit if necessary. We also present a study that assesses the quality of explanations created with this approach.
-
Abstract: This study explored how different formats of instructional visuals affect the accuracy of students' metacognitive judgments. Undergraduates (n = 133) studied a series of five biology texts and made judgments of learning. Students were randomly assigned to study the texts only (text-only group), study the texts with provided visuals (provided visuals group), study the texts and generate their own visuals (learner‐generated visuals group), or study the texts and observe animations of instructor‐generated visuals (instructor‐generated visuals group). After studying the texts and making judgments of learning, all students completed multiple‐choice comprehension tests on each text. The learner‐generated and instructor‐generated visuals groups exhibited significantly higher relative judgment accuracy than the text-only and provided visuals groups, though this effect was relatively small. The learner‐generated visuals group also required more study time and was more likely to report the use of visual cues when making their judgments of learning.