Abstract This study explored how different formats of instructional visuals affect the accuracy of students' metacognitive judgments. Undergraduates (n = 133) studied a series of five biology texts and made judgments of learning. Students were assigned randomly to study the texts only (text only), study the texts with provided visuals (provided visuals group), study the texts and generate their own visuals (learner‐generated visuals group), or study the texts and observe animations of instructor‐generated visuals (instructor‐generated visuals group). After studying the texts and making judgments of learning, all students completed multiple‐choice comprehension tests on each text. The learner‐generated and instructor‐generated visuals groups exhibited significantly higher relative judgment accuracy than the text only and provided visuals groups, though this effect was relatively small. The learner‐generated visuals group also required more study time and was more likely to report the use of visual cues when making their judgments of learning.
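"Relative judgment accuracy" in this literature is commonly quantified as a Goodman–Kruskal gamma correlation between each student's judgments of learning and their later test scores, though the abstract does not name the exact measure; a minimal sketch with made-up data:

```python
from itertools import combinations

def gamma(judgments, scores):
    """Goodman-Kruskal gamma: +1 when higher judgments always accompany
    higher scores, -1 when they never do; tied pairs are ignored."""
    concordant = discordant = 0
    for (j1, s1), (j2, s2) in combinations(zip(judgments, scores), 2):
        product = (j1 - j2) * (s1 - s2)
        if product > 0:
            concordant += 1
        elif product < 0:
            discordant += 1
    total = concordant + discordant
    return (concordant - discordant) / total if total else float("nan")

# Hypothetical data: judgments of learning (0-100) and test scores for five texts
jols = [80, 60, 90, 50, 70]
test_scores = [4, 3, 5, 2, 1]
print(round(gamma(jols, test_scores), 2))  # 0.6
```

A gamma near +1 would mean the student reliably judged their better-learned texts as better learned; values near 0 indicate judgments uncorrelated with actual performance.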
Fostering knowledge building in learning by teaching: A test of the drawing‐facilitates‐explaining hypothesis
Abstract Undergraduates (n = 132) learned about the human respiratory system and then taught what they learned by explaining aloud on video. Following a 2 × 2 design, students either generated their own words or visuals on paper while explaining aloud, or they viewed instructor‐provided words or visuals while explaining aloud. One week after teaching, students completed explanation, drawing, and transfer tests. Teaching with provided or generated visualizations resulted in significantly higher transfer test performance than teaching with provided or generated words. Furthermore, teaching with provided visuals led to significantly higher drawing test performance than teaching with generated visuals. Finally, the number of elaborations in students' explanations during teaching did not significantly differ across groups but was significantly associated with subsequent explanation and transfer test performance. Overall, the findings partially support the hypothesis that visuals facilitate learning by explaining, yet the benefits appeared stronger for instructor‐provided visuals than learner‐generated drawings.
- Award ID(s): 2055117
- PAR ID: 10442033
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Applied Cognitive Psychology
- Volume: 37
- Issue: 5
- ISSN: 0888-4080
- Page Range / eLocation ID: p. 1124-1138
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Carvalho, Paulo F. (Ed.) Evidence-based teaching practices are associated with improved student academic performance. However, these practices encompass a wide range of activities, and determining which type, intensity, or duration of activity improves student exam performance has been elusive. To address this shortcoming, we used a previously validated classroom observation tool, the Practical Observation Rubric to Assess Active Learning (PORTAAL), to measure the presence, intensity, and duration of evidence-based teaching practices in a retrospective study of upper- and lower-division biology courses. We determined the cognitive challenge of exams by categorizing all exam questions from the courses using Bloom's Taxonomy of Cognitive Domains. We used structural equation modeling to correlate the PORTAAL practices with exam performance while controlling for the cognitive challenge of exams, students' GPA at the start of the term, and students' demographic factors. Small-group activities, randomly calling on students or groups to answer questions, explaining alternative answers, and the total time students spent thinking, working with others, or answering questions correlated positively with exam performance. On exams at higher Bloom's levels, students explaining the reasoning underlying their answers, students working alone, and receiving positive feedback from the instructor also correlated with increased exam performance. Our study is the first to demonstrate a correlation between the intensity or duration of evidence-based PORTAAL practices and student exam performance while controlling for the Bloom's level of exams, as well as to look more specifically at which practices correlate with performance on exams at low and high Bloom's levels. This level of detail should provide valuable insights for faculty as they prioritize changes to their teaching. Because multiple PORTAAL practices had a positive association with exam performance, instructors may be encouraged to realize that there are many ways to benefit students' learning by incorporating these evidence-based teaching practices.
We propose and evaluate a lightweight strategy for tracing code that can be efficiently taught to novice programmers, building on recent findings on "sketching" when tracing. This strategy helps novices apply the syntactic and semantic knowledge they are learning by encouraging line-by-line tracing and providing an external representation of memory for them to update. To evaluate the effect of teaching this strategy, we conducted a block-randomized experiment with 24 novices enrolled in a university-level CS1 course. We spent only 5-10 minutes introducing the strategy to the experimental condition. We then asked both conditions to think aloud as they predicted the output of short programs. Students using this strategy scored on average 15% higher than students in the control group on the tracing problems used in the study (p < 0.05). Qualitative analysis of think-aloud and interview data showed that tracing systematically (line-by-line and "sketching" intermediate values) led to better performance and that the strategy scaffolded and encouraged systematic tracing. Students who learned the strategy also scored on average 7% higher on the course midterm. These findings suggest that in under 1 hour and without computer-based tools, we can improve CS1 students' tracing abilities by explicitly teaching a strategy.
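The "sketching" strategy described above amounts to updating an external memory table after mentally executing each line. A toy illustration (the program and annotations are our own, not drawn from the study's materials):

```python
# Trace line by line, updating a sketched "memory table" after each line.
# The table is written here as comments, as a novice would sketch it on paper.

x = 3         # memory: x=3
y = x + 2     # memory: x=3, y=5
x = y * 2     # memory: x=10, y=5
print(x - y)  # prints 5, read directly off the final memory sketch
```

The point of the external sketch is that the predicted output falls out of the final row of the table rather than being held in working memory.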
Abstract Prior research suggests most students do not glean valid cues from provided visuals, resulting in reduced metacomprehension accuracy. Across four experiments, we explored how the presence of instructional visuals affects students' metacomprehension accuracy and cue use for different types of metacognitive judgments. Undergraduates read texts on biology (Studies 1a and 1b) or chemistry (Studies 2 and 3) topics, made various judgments (test, explain, and draw) for each text, and completed comprehension tests. Students were randomly assigned to receive only texts (text-only condition) or texts with instructional visualizations (text-and-image condition). In Studies 1b, 2, and 3, students also reported the cues they used to make each judgment. Across the set of studies, instructional visualizations harmed relative metacomprehension accuracy. In Studies 1a and 2, this was especially the case when students were asked to judge how well they could draw the processes described in the text; in Study 3, it was especially the case when students were asked to judge how well they would do on a set of comprehension tests. In Studies 2 and 3, students who reported basing their judgments on representation-based cues demonstrated higher relative accuracy than students who reported using heuristic-based cues. Further, across these studies, students reported using visual cues to make their draw judgments, but not their test or explain judgments. Taken together, these results indicate that instructional visualizations can hinder metacognitive judgment accuracy, particularly by influencing the types of cues students use to make judgments of their ability to draw key concepts.
Abstract The positivity principle states that people learn better from instructors who display positive emotions rather than negative emotions. In two experiments, students viewed a short video lecture on a statistics topic in which an instructor stood next to a series of slides as she lectured and then they took either an immediate test (Experiment 1) or a delayed test (Experiment 2). In a between-subjects design, students saw an instructor who used her voice, body movement, gesture, facial expression, and eye gaze to display one of four emotions while lecturing: happy (positive/active), content (positive/passive), frustrated (negative/active), or bored (negative/passive). First, learners were able to recognize the emotional tone of the instructor in an instructional video lecture, particularly by more strongly rating a positive instructor as displaying positive emotions and a negative instructor as displaying negative emotions (in Experiments 1 and 2). Second, concerning building a social connection during learning, learners rated a positive instructor as more likely to facilitate learning, more credible, and more engaging than a negative instructor (in Experiments 1 and 2). Third, concerning cognitive engagement during learning, learners reported paying more attention during learning for a positive instructor than a negative instructor (in Experiments 1 and 2). Finally, concerning learning outcome, learners who had a positive instructor scored higher than learners who had a negative instructor on a delayed posttest (Experiment 2) but not an immediate posttest (Experiment 1). Overall, there is evidence for the positivity principle and the cognitive-affective model of e-learning from which it is derived.
