
Title: Gesture helps learners learn, but not merely by guiding their visual attention
Abstract

Teaching a new concept through gestures—hand movements that accompany speech—facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism—gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture—they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no-gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning—following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.
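A rough, hypothetical sketch of the distinction reported above: a mediation account would have looking patterns carry the effect of training condition on posttest scores, whereas the moderation finding means looking patterns (e.g., following along with speech) predict posttest scores only in the Speech+Gesture condition, i.e., an interaction between looking and condition. The Python snippet below fits such an interaction model on simulated data; the variable names (follow_speech, condition, posttest) and the data are invented for illustration, and this is not the authors' analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 100
    condition = rng.integers(0, 2, n)       # hypothetical: 0 = Speech Alone, 1 = Speech+Gesture
    follow_speech = rng.uniform(0, 1, n)    # hypothetical: how closely looks track the spoken referents
    # Simulated outcome in which following along with speech helps only when
    # gesture is present (a moderation pattern, not a mediation pattern).
    posttest = 0.2 * condition + 0.6 * follow_speech * condition + rng.normal(0, 0.1, n)

    df = pd.DataFrame({"posttest": posttest, "condition": condition,
                       "follow_speech": follow_speech})

    # Moderation shows up as the follow_speech:condition interaction term.
    model = smf.ols("posttest ~ follow_speech * condition", data=df).fit()
    print(model.summary().tables[1])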

 
Award ID(s):
1561405
NSF-PAR ID:
10056664
Author(s) / Creator(s):
 ;  ;  ;  ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Developmental Science
Volume:
21
Issue:
6
ISSN:
1363-755X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Teaching a new concept with gestures – hand movements that accompany speech – facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin-Meadow, 2005). However, the mechanisms underlying this phenomenon are still being explored. Here, we use eye tracking to explore one mechanism – gesture's ability to direct visual attention. We examine how children allocate their visual attention during a mathematical equivalence lesson that either contains gesture or does not. We show that gesture instruction improves posttest performance, and additionally that gesture does change how children visually attend to instruction: children look more to the problem being explained, and less to the instructor. However, looking patterns alone cannot explain gesture's effect, as posttest performance is not predicted by any of our looking-time measures. These findings suggest that gesture does guide visual attention, but that attention alone cannot account for its facilitative learning effects.
  2. Abstract

    When asked to explain their solutions to a problem, children often gesture and, at times, these gestures convey information that is different from the information conveyed in speech. Children who produce these gesture-speech "mismatches" on a particular task have been found to profit from instruction on that task. We have recently found that some children produce gesture-speech mismatches when identifying numbers at the cusp of their knowledge; for example, a child incorrectly labels a set of two objects with the word "three" and simultaneously holds up two fingers. These mismatches differ from previously studied mismatches (where the information conveyed in gesture has the potential to be integrated with the information conveyed in speech) in that the gestured response contradicts the spoken response. Here, we ask whether these contradictory number mismatches predict which learners will profit from number-word instruction. We used the Give-a-Number task to measure number knowledge in 47 children (mean age = 4.1 years, SD = 0.58), and used the What's on this Card task to assess whether children produced gesture-speech mismatches above their knower level. Children who were early in their number learning trajectories ("one-knowers" and "two-knowers") were then randomly assigned, within knower level, to one of two training conditions: a Counting condition in which children practiced counting objects, or an Enriched Number Talk condition containing counting, labeling set sizes, spatial alignment of neighboring sets, and comparison of these sets. Controlling for counting ability, we found that children were more likely to learn the meaning of new number words in the Enriched Number Talk condition than in the Counting condition, but only if they had produced gesture-speech mismatches at pretest. The findings suggest that numerical gesture-speech mismatches are a reliable signal that a child is ready to profit from rich number instruction and provide evidence, for the first time, that cardinal number gestures have a role to play in number learning.

     
  3. Lay Summary

    In this study, we leverage a new technology that combines eye tracking and automatic computer programs to help very young children with ASD look at social information in a more prototypical way. In a randomized controlled trial, we show that the use of this technology prevents the diminishing attention toward social information normally seen in children with ASD over the course of a single experimental session. This work represents development toward new social attention therapeutic systems that could augment current behavioral interventions.

     
  4.
    Whereas social visual attention has been examined in computer-mediated (e.g., shared screen) or video-mediated (e.g., FaceTime) interaction, it has yet to be studied in mixed-media interfaces that combine video of the conversant along with other UI elements. We analyzed eye gaze of 37 dyads (74 participants) who were tasked with negotiating the price of a new car (as a buyer and seller) using mixed-media video conferencing under competitive or cooperative negotiation instructions (experimental manipulation). We used multidimensional recurrence quantification analysis to extract spatio-temporal patterns corresponding to mutual gaze (individuals look at each other), joint attention (individuals focus on the same elements of the interface), and gaze aversion (an individual looks at their partner, who is looking elsewhere). Our results indicated that joint attention predicted the sum of points attained by the buyer and seller (i.e., the joint score). In contrast, gaze aversion was associated with faster time to complete the negotiation, but with a lower joint score. Unexpectedly, mutual gaze was highly infrequent and unrelated to the negotiation outcomes and none of the gaze patterns predicted subjective perceptions of the negotiation. There were also no effects of gender composition or negotiation condition on the gaze patterns or negotiation outcomes. Our results suggest that social visual attention may operate differently in mixed-media collaborative interfaces than in face-to-face interaction. As mixed-media collaborative interfaces gain prominence, our work can be leveraged to inform the design of gaze-sensitive user interfaces that support remote negotiations among other tasks. 
  5. Abstract In this article, we present a live speech-driven, avatar-mediated, three-party telepresence system through which three distant users, embodied as avatars in a shared 3D virtual world, can engage in natural three-party telepresence without tracking devices. Based on live speech input from the three users, the system generates the corresponding conversational motions of all the avatars in real time, including head motion, eye motion, lip movement, torso motion, and hand gesture. All motions are generated automatically on each user's side from live speech input, and a cloud server transmits and synchronizes motion and speech among the users. We conduct a formal user study to evaluate the usability and effectiveness of the system by comparing it with a well-known online virtual world, Second Life, and a widely used online teleconferencing system, Skype. The user study results indicate that our system provides a measurably better telepresence user experience than these two widely used methods.