Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real‐time behaviors required for learning new words during free‐flowing toy play, we measured infants’ visual attention and manual actions on to‐be‐learned toys. Parents and 12‐to‐26‐month‐old infants wore wireless head‐mounted eye trackers, allowing them to move freely around a home‐like lab environment. After the play session, infants were tested on their knowledge of object‐label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants’ attention during and around a labeling utterance that predicted whether an object‐label mapping was learned. More specifically, infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention (when infants’ hands and eyes were attending to the same object) predicted word learning. Our results implicate a causal pathway through which infants’ bodily actions play a critical role in early word learning.
- NSF-PAR ID: 10442580
- Publisher / Repository: Wiley-Blackwell
- Journal Name: Developmental Science
- Volume: 26
- Issue: 2
- ISSN: 1363-755X
- Sponsoring Org: National Science Foundation
More Like this
-
In human infants, the ability to share attention with others is facilitated by increases in attentional selectivity and focus. Differences in early attention have been associated with socio‐cognitive outcomes including language, yet the social mechanisms of attention organization in early infancy have only recently been considered. Here, we examined how social coordination between 5‐month‐old infants and caregivers relates to differences in infant attention, including looking preferences, span, and reactivity to caregivers’ social cues. Using a naturalistic play paradigm, we found that 5‐month‐olds who received a high ratio of sensitive (jointly focused) contingent responses showed strong preferences for objects with which their caregivers were manually engaged. In contrast, infants whose caregivers exhibited high ratios of redirection (attempts to shift focus) showed no preferences for caregivers’ held objects. Such differences have implications for recent models of cognitive development, which rely on early looking preferences for adults’ manually engaged objects as a pathway toward joint attention and word learning. Further, sensitivity and redirectiveness predicted infant attention even in reaction to caregiver responses that were non‐referential (neither sensitive nor redirective). In response to non‐referential responses, infants of highly sensitive caregivers oriented less frequently than infants of highly redirective caregivers, who showed increased distractibility. Our results suggest that specific dyadic exchanges predict infant attention differences toward broader social cues, which may have consequences for social‐cognitive outcomes.
-
Abstract
Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear, and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language‐learning environment of American English‐learning toddlers by testing how well phonetic clarity and referential clarity align in infant‐directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words’ first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents’ speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less‐clear ones. Such multimodal “gems” offer special opportunities for early word learning.
Research Highlights
In parent‐infant interaction, parents’ referential intentions are sometimes clear and sometimes unclear; likewise, parents’ pronunciation is sometimes clear and sometimes quite difficult to understand.
We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance.
Thus, there are globally valuable instances (“gems”) from which children could learn about words’ pronunciations and words’ meanings at the same time.
Homing in on clear phonetic instances and filtering out less‐clear ones would help children identify these multimodal “gems” during word learning.
-
Abstract
Infant language learning depends on the distribution of co‐occurrences within language (between words and other words) and between language content and events in the world. Yet infant‐directed speech is not limited to words that refer to perceivable objects and actions. Rather, caregivers’ utterances contain a range of syntactic forms and expressions with diverse attentional, regulatory, social, and referential functions. We conducted a distributional analysis of linguistic content types at the utterance level, and demonstrated that a wide range of content types in maternal speech can be distinguished by their distribution in sequences of utterances and by their patterns of co‐occurrence with infants’ actions. We observed free‐play sessions of 38 12‐month‐old infants and their mothers, annotated maternal utterances for 10 content types, and coded infants’ gaze target and object handling. Results show that all content types tended to repeat in consecutive utterances, whereas preferred transitions between different content types reflected sequences from attention‐capturing to directing and then to descriptive utterances. Specific content types were associated with infants’ engagement with objects (declaratives, descriptions, object names), with disengagement from objects (talk about attention, infant's name), and with infants’ gaze at the mother (affirmations). We discuss how structured discourse might facilitate language acquisition by making speech input more predictable and/or by providing clues about high‐level form‐function mappings.
-
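The utterance-level transition analysis described above can be illustrated with a minimal sketch. The content-type labels and sequence below are hypothetical stand-ins, not the study's actual annotation scheme; the point is only how consecutive-utterance repetition separates from transitions between different types.

```python
from collections import Counter

# Hypothetical sequence of utterance-level content types; the labels
# are illustrative stand-ins, not the study's actual annotations.
utterances = ["attention", "attention", "directive", "description",
              "description", "object_name", "attention", "directive",
              "description"]

# Count transitions between consecutive utterances.
transitions = Counter(zip(utterances, utterances[1:]))

# Consecutive repetition of the same content type sits on the
# "diagonal" of the transition table; changes sit off it.
repeats = sum(n for (a, b), n in transitions.items() if a == b)
changes = sum(n for (a, b), n in transitions.items() if a != b)
```

Applied to real annotations, a dominant diagonal would reflect the reported tendency for content types to repeat across consecutive utterances, while the off-diagonal structure would capture preferred sequences such as attention-capturing followed by directing and descriptive utterances.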
Early object name learning is often conceptualized as a problem of mapping heard names to referents. However, infants do not hear object names as discrete events but rather in extended interactions organized around goal-directed actions on objects. The present study examined the statistical structure of the nonlinguistic events that surround parent naming of objects. Parents and 12-month-old infants were left alone in a room for 10 minutes with 32 objects available for exploration. Parent and infant handling of objects and parent naming of objects were coded. The four measured statistics were from measures used in the study of coherent discourse: (i) a frequency distribution in which actions were frequently directed to a few objects and more rarely to other objects; (ii) repeated returns to the high-frequency objects over the 10-minute play period; (iii) clustered repetitions and continuity of actions on objects; and (iv) structured networks of transitions among objects in play that connected all the played-with objects. Parent naming was infrequent but related to the statistics of object-directed actions. The implications of the discourse-like stream of actions are discussed in terms of learning mechanisms that could support rapid learning of object names from relatively few name-object co-occurrences.
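The first and fourth of the four statistics above can be sketched on a toy action stream. The object IDs below are hypothetical, not data from the study; the sketch shows how a skewed frequency distribution and a transition network fall out of a coded sequence of object-directed actions.

```python
from collections import Counter

# Hypothetical stream of successively handled objects during play;
# IDs are illustrative stand-ins for the study's 32 toys.
actions = ["car", "car", "ball", "car", "cup", "ball", "car",
           "ball", "car", "cup"]

# (i) Frequency distribution: a few objects receive most actions.
freq = Counter(actions)

# (iv) Transition network: edges between successively handled
# objects link together the played-with objects.
edges = {(a, b) for a, b in zip(actions, actions[1:]) if a != b}
played = set(actions)
```

On real coding, statistic (i) would show up as a heavy skew in `freq`, and statistic (iv) as a connected network in `edges` spanning all of `played`.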
-
Abstract
Prior studies found that hand preference trajectories predict preschool language outcomes. However, this approach has been limited to examining bimanual manipulation in toddlers. It is not known whether hand preference during infancy for acquiring objects (i.e., reach‐to‐grasp) similarly predicts childhood language ability. The current study explored this motor‐language developmental cascade in 90 children. Hand preference for acquiring objects was assessed monthly from 6 to 14 months, and language skill was assessed at 5 years. Latent class growth analysis identified three infant hand preference classes: left, early right, and late right. Infant hand preference classes predicted 5‐year language skills. Children in the left and early right classes, who were categorized as having a consistent hand preference, had higher expressive and receptive language scores relative to children in the inconsistent late right class. Consistent classes did not differ from each other on language outcomes. Infant hand preference patterns explained more variance for expressive and receptive language relative to previously reported toddler hand preference patterns, above and beyond socio‐economic status. Results suggest that hand preference, measured at different time points across development using a trajectory approach, is reliably linked to later language.
Highlights
Hand preference trajectories reliably predict preschool language above and beyond SES.
Infants with a consistent hand preference for reaching had greater language skills at 5 years.
Infant hand preference explained more variance in language than toddler hand preference.