Gesture helps learners learn, but not merely by guiding their visual attention
Abstract Teaching a new concept through gestures—hand movements that accompany speech—facilitates learning above-and-beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often-proposed mechanism—gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye-tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture—they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning—following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.
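The mediation/moderation contrast at the heart of this abstract can be made concrete with a small regression sketch. Below is a minimal illustration on simulated data; the variable names (condition, follow_along, posttest) are ours rather than the paper's, and moderation appears as a significant condition-by-follow_along interaction term.

```python
# Minimal sketch of a moderation analysis like the one described above,
# on simulated data. Variable names and effect sizes are illustrative
# assumptions, not the paper's actual variables or estimates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
condition = rng.integers(0, 2, n)        # 0 = Speech Alone, 1 = Speech+Gesture
follow_along = rng.normal(0, 1, n)       # z-scored "follow along with speech"
# Simulate the reported pattern: following along helps only with gesture.
posttest = 0.5 * condition + 0.6 * condition * follow_along + rng.normal(0, 1, n)

df = pd.DataFrame({"condition": condition,
                   "follow_along": follow_along,
                   "posttest": posttest})

# Moderation: the condition x follow_along interaction term tests whether
# gesture changes the impact of following along with speech on learning.
model = smf.ols("posttest ~ condition * follow_along", data=df).fit()
print(model.summary())
```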
- Award ID(s):
- 1561405
- PAR ID:
- 10056664
- Publisher / Repository:
- Wiley-Blackwell
- Date Published:
- Journal Name:
- Developmental Science
- Volume:
- 21
- Issue:
- 6
- ISSN:
- 1363-755X
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract In this article, we present a live speech-driven, avatar-mediated, three-party telepresence system, through which three distant users, embodied as avatars in a shared 3D virtual world, can perform natural three-party telepresence without tracking devices. Based on live speech input from the three users, the system generates in real time the corresponding conversational motions of all the avatars, including head motion, eye motion, lip movement, torso motion, and hand gesture. All motions are generated automatically at each user's side based on live speech input, and a cloud server is utilized to transmit and synchronize motion and speech among the different users. We conduct a formal user study to evaluate the usability and effectiveness of the system by comparing it with a well-known online virtual world, Second Life, and a widely used online teleconferencing system, Skype. The user study results indicate that our system provides a measurably better telepresence user experience than the two widely used alternatives.
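The cloud-synchronization idea described above can be sketched minimally as timestamped per-user updates, each carrying a speech chunk plus the generated motion parameters, released in order behind a short playout delay. The field names and fixed-delay strategy below are our assumptions, not the paper's actual protocol.

```python
# Hypothetical sketch of a timestamped update a cloud relay could use to
# keep each avatar's generated motion aligned with its speech; all names
# here are illustrative assumptions.
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class AvatarUpdate:
    timestamp_ms: int                               # ordering key
    user_id: str = field(compare=False)
    speech_chunk: bytes = field(compare=False)      # encoded audio frame
    motion_params: dict = field(compare=False)      # head/eye/lip/torso/hand pose

class SyncBuffer:
    """Orders incoming updates by timestamp so every client replays
    motion and speech in the same sequence."""
    def __init__(self):
        self._heap: list[AvatarUpdate] = []

    def push(self, update: AvatarUpdate) -> None:
        heapq.heappush(self._heap, update)

    def pop_ready(self, now_ms: int, delay_ms: int = 100) -> list[AvatarUpdate]:
        # Release only updates older than a fixed playout delay, so
        # late-arriving packets can still be slotted into order.
        ready = []
        while self._heap and self._heap[0].timestamp_ms <= now_ms - delay_ms:
            ready.append(heapq.heappop(self._heap))
        return ready
```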
-
Children rely on their approximate number system (ANS) to estimate quantities from a young age. Studies have shown that older children display better ANS performance, but previous research has not explained this improvement. We show that children's development in ANS performance is primarily driven by improved attentional control and awareness of peripheral information. In our experiment, children guessed the number of dots on a computer screen while being eye-tracked. The behavioral and eye-tracking results support this account: children estimate better under the longer display-time condition and with more visual foveation, with the effect of visual foveation mediating that of display time. Older children also make fewer underestimations because they are better at directing their attention and gaze toward areas of interest, and they are more aware of dots in their peripheral vision. Our findings suggest that the development of children's ANS is significantly impacted by the development of their nonnumerical cognitive abilities.
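The mediation logic reported above (display time acting on estimation accuracy through visual foveation) follows the classic three-regression pattern. Here is a minimal sketch on simulated data; the variable names and effect sizes are hypothetical, not the study's.

```python
# Minimal mediation sketch: display time -> visual foveation -> accuracy,
# on simulated data. Full mediation shows up as the time coefficient
# shrinking toward zero once the mediator enters the model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 200
display_time = rng.uniform(0.5, 3.0, n)                  # seconds on screen
foveation = 0.8 * display_time + rng.normal(0, 0.5, n)   # dots foveated
accuracy = 0.7 * foveation + rng.normal(0, 0.5, n)       # estimation accuracy

df = pd.DataFrame({"time": display_time, "fov": foveation, "acc": accuracy})

total  = smf.ols("acc ~ time", data=df).fit()        # total effect of time
med    = smf.ols("fov ~ time", data=df).fit()        # time -> mediator path
direct = smf.ols("acc ~ time + fov", data=df).fit()  # direct effect, mediator controlled

print(total.params["time"], direct.params["time"], med.params["time"])
```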
-
Changes in task demands can have delayed adverse impacts on performance. This phenomenon, known as the workload history effect, is especially of concern in dynamic work domains where operators manage fluctuating task demands. The existing workload history literature does not paint a consistent picture of how these effects manifest, prompting research into measures that shed light on the operator's process. One promising measure is visual attention patterns, owing to its informativeness about various cognitive processes. To explore its ability to explain workload history effects, participants completed a task in an unmanned aerial vehicle command and control testbed in which workload transitioned both gradually and suddenly. The participants' performance and visual attention patterns were studied over time to identify workload history effects. The eye-tracking analysis used a recently developed metric called coefficient K, which indicates whether visual attention is more focal or ambient. The performance results revealed workload history effects, but these depended on the workload level, time elapsed, and performance measure. The eye-tracking analysis suggested that performance suffered when focal attention was deployed during low workload, an unexpected finding. Taken together, these results suggest that unexpected visual attention patterns can impact performance both immediately and over time. Further research is needed; however, this work shows the value of including a real-time visual attention measure, such as coefficient K, as a means to understand how operators manage varying task demands in complex work environments.
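Coefficient K, commonly attributed to Krejtz et al. (2016), is directly computable from fixation durations and the amplitudes of the saccades that follow them. A minimal sketch with toy numbers rather than real gaze data:

```python
# Per-fixation coefficient K: z-scored fixation duration minus z-scored
# amplitude of the following saccade, standardized over the recording.
# K_i > 0 suggests focal attention; K_i < 0 suggests ambient attention.
import numpy as np

def coefficient_k_series(fix_durations, saccade_amplitudes):
    """fix_durations[i] is paired with saccade_amplitudes[i], the
    amplitude of the saccade immediately following fixation i.
    Average K_i inside a time window of interest; over the full
    recording the mean is zero by construction."""
    d = np.asarray(fix_durations, dtype=float)
    a = np.asarray(saccade_amplitudes, dtype=float)
    return (d - d.mean()) / d.std() - (a - a.mean()) / a.std()

# Toy usage: mean K over the first half of a short recording, where long
# fixations with short following saccades yield K > 0 (focal attention).
k = coefficient_k_series([400, 450, 500, 200, 180], [1.0, 0.8, 1.2, 4.0, 5.0])
print(k[:3].mean())
```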
-
Abstract The current study utilized eye tracking to investigate the effects of intersensory redundancy and language on infant visual attention and detection of a change in prosody in audiovisual speech. Twelve-month-old monolingual English-learning infants viewed either synchronous (redundant) or asynchronous (non-redundant) presentations of a woman speaking in native or non-native speech. Halfway through each trial, the speaker changed prosody from infant-directed speech (IDS) to adult-directed speech (ADS) or vice versa. Infants focused more on the speaker's mouth on IDS trials than on ADS trials, regardless of language or intersensory redundancy. Additionally, infants showed greater detection of prosody changes from IDS to ADS in native speech. Planned comparisons indicated that infants detected prosody changes across a broader range of conditions during redundant stimulus presentations. These findings shed light on the influence of language and prosody on infant attention and highlight the complexity of audiovisual speech processing in infancy.