


Title: Relating referential clarity and phonetic clarity in infant‐directed speech
Abstract: Research Highlights

In parent‐infant interaction, parents’ referential intentions are sometimes clear and sometimes unclear; likewise, parents’ pronunciation is sometimes clear and sometimes quite difficult to understand.

We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance (an illustrative permutation-test sketch follows these highlights).

Thus, there are globally valuable instances (“gems”) from which children could learn about words’ pronunciations and words’ meanings at the same time.

Homing in on clear phonetic instances and filtering out less‐clear ones would help children identify these multimodal “gems” during word learning.
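The highlights do not say how "more than expected by chance" was assessed. As a purely illustrative sketch (not the paper's actual analysis), the Python below runs a permutation test on hypothetical per-token clarity scores; all data, variable names, and values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-token scores: for each word token in a corpus, a
# referential clarity score (e.g., how reliably listeners guess the intended
# referent) and a phonetic clarity score (e.g., how reliably the word is
# recognized from its audio alone). Toy data with a built-in association.
referential = rng.random(500)
phonetic = 0.4 * referential + 0.6 * rng.random(500)

observed_r = np.corrcoef(referential, phonetic)[0, 1]

# Null distribution: shuffle one score vector relative to the other,
# breaking the token-level pairing while preserving both marginals.
n_perm = 10_000
null_r = np.array([np.corrcoef(referential, rng.permutation(phonetic))[0, 1]
                   for _ in range(n_perm)])

# One-sided p-value with the +1 correction standard for permutation tests.
p_value = (np.sum(null_r >= observed_r) + 1) / (n_perm + 1)
print(f"observed r = {observed_r:.3f}, permutation p = {p_value:.4f}")
```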

 
Award ID(s): 1917608
NSF-PAR ID: 10445769
Publisher / Repository: Wiley-Blackwell
Journal Name: Developmental Science
ISSN: 1363-755X
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract: Research Highlights

    Infants and toddlers born blind (with no other diagnoses) show a 7.5‐month productive vocabulary delay on average, with wide variability.

    Across the studied age range (7–57 months), vocabulary delays widened with age.

    Blind and sighted children's early vocabularies contain similar distributions of word lengths, parts of speech, semantic categories, and perceptual modalities.

    Blind children (but not sighted children) were more likely to say visual words that could also be experienced through other senses.

     
  2. Abstract

    Most research on early language learning focuses on the objects that infants see and the words they hear in their daily lives, although growing evidence suggests that motor development is also closely tied to language development. To study the real‐time behaviors required for learning new words during free‐flowing toy play, we measured infants’ visual attention and manual actions on to‐be‐learned toys. Parents and 12‐to‐26‐month‐old infants wore wireless head‐mounted eye trackers, allowing them to move freely around a home‐like lab environment. After the play session, infants were tested on their knowledge of object‐label mappings. We found that how often parents named objects during play did not predict learning; instead, it was infants’ attention during and around a labeling utterance that predicted whether an object‐label mapping was learned. More specifically, we found that infant visual attention alone did not predict word learning. Instead, coordinated, multimodal attention (when infants’ hands and eyes were attending to the same object) predicted word learning. Our results implicate a causal pathway through which infants’ bodily actions play a critical role in early word learning.
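    As a rough illustration of this kind of measure (not the authors' actual coding pipeline), the sketch below marks a labeling utterance as "coordinated" when both the gaze stream and the hand-contact stream target the labeled object for most of the utterance. The interval format, field names, and 50% threshold are all assumptions.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Interval:
        start: float   # seconds
        end: float
        target: str    # identifier of the attended/held object

    def overlap(iv: Interval, t0: float, t1: float) -> float:
        """Seconds of overlap between an annotated interval and a window."""
        return max(0.0, min(iv.end, t1) - max(iv.start, t0))

    def coordinated(gaze, hands, t0, t1, obj, min_prop=0.5):
        """True if eyes AND hands were on the labeled object for at least
        min_prop of the labeling utterance (the threshold is an assumption)."""
        window = t1 - t0
        def prop_on(stream):
            secs = sum(overlap(iv, t0, t1) for iv in stream if iv.target == obj)
            return secs / window
        return prop_on(gaze) >= min_prop and prop_on(hands) >= min_prop

    # Toy example: parent says "ball" from t=3.0 s to t=3.8 s.
    gaze = [Interval(2.5, 4.0, "ball")]
    hands = [Interval(2.8, 3.6, "ball"), Interval(3.6, 5.0, "cup")]
    print(coordinated(gaze, hands, 3.0, 3.8, "ball"))  # True
    ```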

     
  3. Abstract

    Infant language learners are faced with the difficult inductive problem of determining how new words map to novel or known objects in their environment. Bayesian inference models have been successful at using the sparse information available in natural child‐directed speech to build candidate lexicons and infer speakers’ referential intentions. We begin by asking how a Bayesian model optimized for monolingual input (the Intentional Model; Frank et al., 2009) generalizes to new monolingual or bilingual corpora and find that, especially in the case of the bilingual input, the model shows a significant decrease in performance. In the next experiment, we propose the ME Model, a modified Bayesian model, which approximates infants’ mutual exclusivity bias to support the differential demands of monolingual and bilingual learning situations. The extended model is assessed using the same corpora of real child‐directed speech, showing that its performance is more robust against varying input and less dependent than the Intentional Model on optimization of its parsimony parameter. We argue that both monolingual and bilingual demands on word learning are important considerations for a computational model, as they can yield significantly different results from those obtained when only one such context is considered.
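    The Intentional Model and the ME Model are fully specified in the cited papers; the sketch below is only a loose, hypothetical illustration of the shared idea: score candidate lexicons against cross-situational data, trade coverage off against a parsimony parameter, and add a mutual-exclusivity penalty for many-to-one mappings. The scoring form and every parameter value here are invented.

    ```python
    import itertools

    # Toy corpus: each situation pairs the words heard with the objects visible.
    corpus = [({"ball", "look"}, {"BALL"}),
              ({"ball", "doggy"}, {"BALL", "DOG"}),
              ({"doggy", "look"}, {"DOG"})]

    def score(lexicon, alpha=0.5, me_penalty=2.0):
        """Higher is better. alpha plays the role of a parsimony parameter;
        me_penalty stands in for a mutual-exclusivity bias (both invented)."""
        s = -alpha * len(lexicon)  # parsimony: prefer smaller lexicons
        for words, objects in corpus:
            # Reward situations where some heard word maps to a visible object.
            if any(lexicon.get(w) in objects for w in words):
                s += 1.0
        # ME bias: penalize mapping several words onto the same object.
        mapped = list(lexicon.values())
        return s - me_penalty * (len(mapped) - len(set(mapped)))

    # Exhaustively search all word-to-object lexicons over a tiny vocabulary.
    words, objects = {"ball", "doggy", "look"}, {"BALL", "DOG"}
    best = max((dict(zip(sub, combo))
                for r in range(len(words) + 1)
                for sub in itertools.combinations(sorted(words), r)
                for combo in itertools.product(sorted(objects), repeat=r)),
               key=score)
    print(best)  # {'ball': 'BALL', 'doggy': 'DOG'}
    ```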

     
    more » « less
  4. Abstract: Research Highlights

    Children with reading disabilities (RD) frequently have a co‐occurring math disability (MD), but the mechanisms behind this high comorbidity are not well understood.

    We examined differences in phonological awareness, reading skills, and executive function between children with RD only and children with co‐occurring RD+MD, using behavioral and fMRI measures.

    Children with RD only and children with RD+MD did not differ in their phonological processing, either behaviorally or in the brain.

    RD+MD was associated with additional behavioral difficulties in working memory and with reduced visual cortex activation during a visuospatial working memory task.

     
  5. Abstract: Research Highlights

    Children and adults conceptually and perceptually categorize speech and song from age 4.

    Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song (a rough feature-extraction sketch follows these highlights).

    Acoustic cue weighting changes with age, becoming adult‐like at age 8 for perceptual categorization and at age 12 for conceptual differentiation.

    Young children are still learning to categorize speech and song, which leaves open the possibility that music‐ and language‐specific skills are not so domain‐specific.
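    As a rough sketch of how stand-ins for these four cues could be computed with an off-the-shelf audio library (the paper's exact cue definitions almost certainly differ), the function below uses librosa; the function name and all specific choices are invented.

    ```python
    import numpy as np
    import librosa

    def rough_song_speech_cues(path):
        """Rough stand-ins for the four cues named in the highlight."""
        y, sr = librosa.load(path, sr=None, mono=True)

        # Utterance duration in seconds.
        duration = len(y) / sr

        # F0 instability: spread of the voiced pitch track, in semitones.
        f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                     fmax=librosa.note_to_hz("C6"), sr=sr)
        f0v = f0[voiced & ~np.isnan(f0)]
        f0_instability = np.std(12 * np.log2(f0v / np.median(f0v)))

        # Spectral flux: mean frame-to-frame change in the magnitude spectrum.
        S = np.abs(librosa.stft(y))
        flux = np.mean(np.sqrt(np.sum(np.diff(S, axis=1) ** 2, axis=0)))

        # Harmonicity proxy: share of signal energy in the harmonic component.
        harmonic = librosa.effects.harmonic(y)
        harmonicity = np.sum(harmonic ** 2) / np.sum(y ** 2)

        return dict(duration=duration, f0_instability=f0_instability,
                    spectral_flux=flux, harmonicity=harmonicity)
    ```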

     