Title: The clarity of word repetitions in American English infant-directed speech
Abstract: Words in infant-directed speech (IDS) are often phonetically reduced. This likely renders words harder for infants to learn and recognize. This difficulty might be mitigated by the repetitive nature of IDS, in particular if reduced instances are often preceded by clear instances (i.e., the first-mention effect). To characterize phonetic clarity in American English word repetitions, words were extracted from the IDS of eight mothers and presented to adults (n = 36) who judged their clarity. First mentions of repeated words were found to be clearer than second mentions, though this effect was small. Clarity was rated as greater for less common words and for utterance-final words. Clarity was also greater for words parents thought their child knew. The results help guide intuitions about the phonetic problem infants face when learning their first words.
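To make the first-mention comparison concrete, here is a minimal sketch of that kind of analysis. The file name, the column names (word, mention, clarity), and the choice of a paired t-test over word-level means are illustrative assumptions, not the paper's actual pipeline, which may differ (e.g., mixed-effects models over individual ratings).

```python
# Illustrative sketch only; not the study's actual analysis.
# Assumes a hypothetical file "clarity_ratings.csv" with one row per rated token:
#   word, mention (1 = first mention, 2 = second mention), clarity (listener rating).
import pandas as pd
from scipy import stats

ratings = pd.read_csv("clarity_ratings.csv")

# Average the listener ratings per word and mention number, keeping only
# words that were rated at both their first and second mentions.
by_word = (ratings.groupby(["word", "mention"])["clarity"]
                  .mean()
                  .unstack("mention")
                  .dropna())

# Paired comparison of first- vs. second-mention clarity across words.
t, p = stats.ttest_rel(by_word[1], by_word[2])
print(f"first mentions: {by_word[1].mean():.2f}, "
      f"second mentions: {by_word[2].mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```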
Award ID(s):
2444175; 1917608
PAR ID:
10668922
Author(s) / Creator(s):
Publisher / Repository:
Cambridge University Press
Date Published:
Journal Name:
Journal of Child Language
ISSN:
0305-0009
Page Range / eLocation ID:
1 to 20
Subject(s) / Keyword(s):
infant development; phonetic clarity; word learning
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear, and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language-learning environment of American English-learning toddlers by testing how well phonetic clarity and referential clarity align in infant-directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less-clear ones. Such multimodal "gems" offer special opportunities for early word learning. Research Highlights: In parent-infant interaction, parents' referential intentions are sometimes clear and sometimes unclear; likewise, parents' pronunciation is sometimes clear and sometimes quite difficult to understand. We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance. Thus, there are globally valuable instances ("gems") from which children could learn about words' pronunciations and words' meanings at the same time. Homing in on clear phonetic instances and filtering out less-clear ones would help children identify these multimodal "gems" during word learning.
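The alignment described in this abstract can be illustrated with a small sketch: given per-first-mention measures of phonetic clarity and referential clarity, compute their correlation and flag the tokens that are clear in both channels. The file name, column names, and the median-split definition of "gems" below are assumptions for illustration, not the study's actual measures or analysis.

```python
# Illustrative sketch: correlating phonetic clarity with referential clarity
# across words' first mentions. The file and column names are assumptions;
# they do not come from the HSP corpus described above.
import pandas as pd
from scipy import stats

# Hypothetical columns: word, phonetic_clarity, referential_clarity
first_mentions = pd.read_csv("first_mentions.csv")

rho, p = stats.spearmanr(first_mentions["phonetic_clarity"],
                         first_mentions["referential_clarity"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Crude "gems" count: tokens that are clear in both channels at once
# (above the median on both measures).
gems = first_mentions[
    (first_mentions["phonetic_clarity"] > first_mentions["phonetic_clarity"].median())
    & (first_mentions["referential_clarity"] > first_mentions["referential_clarity"].median())
]
print(f"{len(gems)} of {len(first_mentions)} first mentions are clear in both channels")
```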
  2. The goal of this research is to understand how bilingual and monolingual parents adjust their speech when talking to infants. We examined pitch characteristics of infant-directed speech (IDS) and adult-directed speech (ADS) with Spanish-English bilingual and English monolingual parents and their infants (8–20 months of age). Thirty-eight parent-infant dyads participated in two naturalistic play tasks. Parents spoke with a bilingual researcher to collect samples of ADS. Results showed that both parent groups produced higher maximum and average fundamental frequency in IDS than ADS, suggesting that caregivers in both groups make similar pitch adjustments across registers. However, for bilinguals, the IDS versus ADS difference was larger in English than Spanish; bilingual parents differentiated IDS adjustments across languages. Analyses across word repetitions revealed that in bilingual parents' IDS, there was no change in pitch between the first and second repetitions of target words, even when the repetitions occurred in different languages. Taken together, the results suggest that bilingual parents adjust their IDS pitch similarly to English-speaking monolinguals, but differentiate their English and Spanish IDS adjustments. Overall, this project contributes to our understanding of parents' register adjustments across multilingual language-learning contexts.
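As a rough illustration of the kind of pitch measurement involved, the sketch below extracts fundamental frequency (F0) from one IDS and one ADS recording and compares mean and maximum values. The file names are placeholders, and the pitch range and tooling (librosa's pYIN tracker) are assumptions; the study's own acoustic pipeline is not described here.

```python
# Illustrative sketch of an IDS vs. ADS pitch comparison; not the study's pipeline.
import librosa
import numpy as np

def f0_stats(path):
    y, sr = librosa.load(path, sr=None)
    # Track F0 over a typical adult speech range (assumed 75-600 Hz).
    f0, voiced, _ = librosa.pyin(y, fmin=75, fmax=600, sr=sr)
    f0 = f0[voiced]                      # keep voiced frames only
    return np.nanmean(f0), np.nanmax(f0)

# Placeholder recordings of one parent speaking to their infant vs. an adult.
ids_mean, ids_max = f0_stats("parent_ids.wav")
ads_mean, ads_max = f0_stats("parent_ads.wav")
print(f"IDS: mean F0 = {ids_mean:.0f} Hz, max F0 = {ids_max:.0f} Hz")
print(f"ADS: mean F0 = {ads_mean:.0f} Hz, max F0 = {ads_max:.0f} Hz")
```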
  3. To begin learning their language, infants must locate words in the speech signal. Some models of word discovery presuppose that the discovery process depends on identifying phonetic segments (phones) in speech. To test the plausibility of models arguing that infants can reliably categorize consonants in speech, adult native speakers were asked to identify the consonant in vowel-consonant-vowel sequences extracted from spontaneous English infant-directed speech. Listeners could consistently identify some instances of consonants (for example, correctly indicating that an /s/ was an /s/). But many tokens (about half) were not consistently identifiable. Performance was significantly worse for codas than onsets. Providing the full utterance context in low-pass-filtered form did not aid recognition, nor did familiarization with the talker. In a second task, listeners were barely above chance in guessing whether a consonant was a word onset or a word-final coda. Performance on infant-directed speech was not markedly better than performance on a comparison set of adult-directed speech consonants. Erroneous responses frequently had little systematic resemblance to the correct answer. The results suggest that it is not plausible that infants can parse most utterances exhaustively into strings of uttered speech sounds and feed those strings into a statistical clustering mechanism. 
  4. Abstract: Children's daily contexts shape their experiences. In this study, we assessed whether variations in infant placement (e.g., held, bouncy seat) are associated with infants' exposure to adult speech. Using repeated survey sampling of mothers and continuous audio recordings, we tested whether the use of independence-supporting placements was associated with adult speech exposure in a Southeastern U.S. sample of 60 4- to 6-month-old infants (38% male, predominantly White, not Hispanic/Latinx, from higher socioeconomic status households). Within-subject analyses indicated that independence-supporting placements were associated with exposure to fewer adult words in the moment. Between-subjects analyses indicated that infants more frequently reported to be in independence-supporting placements that also provided posture support (i.e., an exersaucer) were exposed to relatively fewer adult words and less consistent adult speech across the day. These findings indicate that infants' opportunities for exposure to adult speech 'in the wild' may vary based on immediate physical context.
  5. In the first year of life, infants' speech perception becomes attuned to the sounds of their native language. Many accounts of this early phonetic learning exist, but computational models predicting the attunement patterns observed in infants from the speech input they hear have been lacking. A recent study presented the first such model, drawing on algorithms proposed for unsupervised learning from naturalistic speech, and tested it on a single phone contrast. Here we study five such algorithms, selected for their potential cognitive relevance. We simulate phonetic learning with each algorithm and perform tests on three phone contrasts from different languages, comparing the results to infants' discrimination patterns. The five models display varying degrees of agreement with empirical observations, showing that our approach can help decide between candidate mechanisms for early phonetic learning, and providing insight into which aspects of the models are critical for capturing infants' perceptual development. 
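As a concrete, deliberately simplified stand-in for the kind of unsupervised learner this literature evaluates, the sketch below clusters acoustic frames with a Gaussian mixture model (no phone labels) and uses cluster overlap as a crude discrimination probe. The recordings, the feature choice (MFCCs), the number of clusters, and the overlap measure are all assumptions for illustration; none of the five algorithms compared in the paper is reproduced here.

```python
# Illustrative stand-in for an unsupervised phonetic learner: cluster acoustic
# frames from naturalistic speech, then ask whether tokens of a phone contrast
# land in different clusters. Not one of the models evaluated in the paper.
import librosa
import numpy as np
from sklearn.mixture import GaussianMixture

def mfcc_frames(path):
    y, sr = librosa.load(path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # shape: (frames, 13)

# Hypothetical recordings standing in for the learner's input speech.
frames = np.vstack([mfcc_frames(p) for p in ["speech_a.wav", "speech_b.wav"]])

# Fit an unsupervised mixture model; the number of components is a free choice.
gmm = GaussianMixture(n_components=50, covariance_type="diag", random_state=0).fit(frames)

# Crude discrimination probe: two tokens count as "discriminated" if their
# frames fall mostly in different clusters.
tok1 = gmm.predict(mfcc_frames("token_da.wav"))
tok2 = gmm.predict(mfcc_frames("token_ta.wav"))
overlap = len(set(tok1) & set(tok2)) / len(set(tok1) | set(tok2))
print(f"cluster overlap between tokens: {overlap:.2f} (lower = more discriminable)")
```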