Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real‐time behaviors required for learning new words during free‐flowing toy play, we measured infants' visual attention and manual actions on to‐be‐learned toys. Parents and 12‐to‐26‐month‐old infants wore wireless head‐mounted eye trackers, allowing them to move freely around a home‐like lab environment. After the play session, infants were tested on their knowledge of object‐label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants' attention during and around a labeling utterance that predicted whether an object‐label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention, when infants' hands and eyes were attending to the same object, predicted word learning. Our results implicate a causal pathway through which infants' bodily actions play a critical role in early word learning.
Parental responsiveness to infant behaviors is a strong predictor of infants' language and cognitive outcomes. The mechanisms underlying this effect, however, are relatively unknown. We examined the effects of parent speech on infants' visual attention, manual actions, hand‐eye coordination, and dyadic joint attention during parent‐infant free play. We report on two studies that used head‐mounted eye trackers in increasingly naturalistic laboratory environments. In Study 1, 12‐to‐24‐month‐old infants and their parents played on the floor of a seminaturalistic environment with 24 toys. In Study 2, a different sample of dyads played in a home‐like laboratory with 10 toys and no restrictions on their movement. In both studies, we present evidence that responsive parent speech extends the duration of infants' multimodal attention. This social "boost" of parent speech impacts multiple behaviors that have been linked to later outcomes: visual attention, manual actions, hand‐eye coordination, and joint attention. Further, the amount that parents talked during the interaction was negatively related to the effects of parent speech on infant attention. Together, these results provide evidence of a trade‐off between quantity of speech and its effects, suggesting multiple pathways through which parents impact infants' multimodal attention to shape the moment‐by‐moment dynamics of an interaction.
- NSF-PAR ID: 10443945
- Publisher / Repository: Wiley-Blackwell
- Journal Name: Infancy
- Volume: 27
- Issue: 6
- ISSN: 1525-0008
- Page Range / eLocation ID: p. 1154-1178
- Sponsoring Org: National Science Foundation
More Like this
The present article investigated the composition of different joint gaze components used to operationalize various types of coordinated attention between parents and infants and which types of coordinated attention were associated with future vocabulary size. Twenty‐five 9‐month‐old infants and their parents wore head‐mounted eye trackers as they played with objects together. With high‐density gaze data, a variety of coordinated attention bout types were quantitatively measured by combining different gaze components, such as mutual gaze, joint object looks, face looks, and triadic gaze patterns. The key components of coordinated attention that were associated with vocabulary size at 12 and 15 months included the simultaneous combination of parent triadic gaze and infant object looking. The results from this article are discussed in terms of the importance of parent attentional monitoring and infant sustained attention for language development.
-
Social motivation (the psychobiological predisposition for social orienting, seeking social contact, and maintaining social interaction) manifests in early infancy and is hypothesized to be foundational for social communication development in typical and atypical populations. However, the lack of infant social‐motivation measures has hindered delineation of associations between infant social motivation, other early‐arising social abilities such as joint attention, and language outcomes. To investigate how infant social motivation contributes to joint attention and language, this study utilizes a mixed longitudinal sample of 741 infants at high (HL = 515) and low (LL = 226) likelihood for ASD. Using moderated nonlinear factor analysis (MNLFA), we incorporated items from parent‐report measures to establish a novel latent factor model of infant social motivation that exhibits measurement invariance by age, sex, and familial ASD likelihood. We then examined developmental associations between 6‐ and 12‐month social motivation, joint attention at 12–15 months, and language at 24 months of age. On average, greater social‐motivation growth from 6 to 12 months was associated with greater initiating joint attention (IJA) and trend‐level increases in sophistication of responding to joint attention (RJA). IJA and RJA were both positively associated with 24‐month language abilities. There were no additional associations between social motivation and future language in our path model. These findings substantiate a novel, theoretically driven approach to modeling social motivation and suggest a developmental cascade through which social motivation impacts other foundational skills. These findings have implications for the timing and nature of intervention targets to support social communication development in infancy.
Highlights We describe a novel, theoretically based model of infant social motivation wherein multiple parent‐reported indicators contribute to a unitary latent social‐motivation factor.
Analyses revealed social‐motivation factor scores exhibited measurement invariance for a longitudinal sample of infants at high and low familial ASD likelihood.
Social‐motivation growth from ages 6–12 months is associated with better 12–15‐month joint attention abilities, which in turn are associated with greater 24‐month language skills.
Findings inform timing and targets of potential interventions to support healthy social communication in the first year of life.
-
Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear, and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language‐learning environment of American English‐learning toddlers by testing how well phonetic clarity and referential clarity align in infant‐directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less‐clear ones. Such multimodal "gems" offer special opportunities for early word learning.
Research Highlights In parent‐infant interaction, parents’ referential intentions are sometimes clear and sometimes unclear; likewise, parents’ pronunciation is sometimes clear and sometimes quite difficult to understand.
We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance.
Thus, there are globally valuable instances (“gems”) from which children could learn about words’ pronunciations and words’ meanings at the same time.
Homing in on clear phonetic instances and filtering out less‐clear ones would help children identify these multimodal “gems” during word learning.
-
Object names are a major component of early vocabularies, and learning object names depends on being able to visually recognize objects in the world. However, the fundamental visual challenge of the moment‐to‐moment variations in object appearances that learners must resolve has received little attention in word learning research. Here we provide the first evidence that image‐level object variability matters and may be the link that connects infant object manipulation to vocabulary development. Using head‐mounted eye tracking, the present study objectively measured individual differences in the moment‐to‐moment variability of visual instances of the same object, from infants' first‐person views. Infants who generated more variable visual object images through manual object manipulation at 15 months of age experienced greater vocabulary growth over the next six months. Elucidating infants' everyday visual experiences with objects may constitute a crucial missing link in our understanding of the developmental trajectory of object name learning.