We asked whether increased exposure to iambs, two-syllable words with stress on the second syllable (e.g., guitar), by way of another language, Spanish, facilitates English-learning infants' segmentation of iambs. Spanish has twice as many iambic words (40%) as English (20%). Using the Headturn Preference Procedure, we tested bilingual Spanish- and English-learning 8-month-olds' ability to segment English iambs. Monolingual English-learning infants succeed at this task only by 11 months. We showed that at 8 months, bilingual Spanish- and English-learning infants successfully segmented English iambs, rather than just the stressed syllable, unlike their monolingual English-learning peers. At the same age, bilingual infants failed to segment Spanish iambs, just like their monolingual Spanish-learning peers. These results cannot be explained by bilingual infants' reliance on transitional probability cues to segment words in both of their native languages, because the statistical cues were comparable in the two languages yet segmentation succeeded only in English. Instead, based on this accelerated development, we argue for autonomous but interdependent development of the two languages of bilingual infants.
English‐learning infants developing sensitivity to vowel phonotactic cues to word segmentation
Abstract Previous research has shown that when domain-general transitional probability (TP) cues to word segmentation are in conflict with language-specific stress cues, English-learning 5- and 7-month-olds rely on TP, whereas 9-month-olds rely on stress. In two artificial languages, we evaluated English-learning infants' sensitivity to TP cues to word segmentation vis-à-vis language-specific vowel phonotactic (VP) cues (English words do not end in lax vowels). These cues were either consistent or conflicting. When these cues were in conflict, 10-month-olds relied on the VP cues, whereas 5-month-olds relied on TP. These findings align with statistical bootstrapping accounts, where infants initially use domain-general distributional information for word segmentation, and subsequently discover language-specific patterns based on segmented words.

Research Highlights:
- Research indicates that when transitional probability (TP) conflicts with stress cues for word segmentation, English-learning 9-month-olds rely on stress, whereas younger infants rely on TP.
- In two artificial languages, we evaluated English-learning infants' sensitivity to TP versus vowel phonotactic (VP) cues for word segmentation.
- When these cues conflicted, 10-month-olds relied on VPs, whereas 5-month-olds relied on TP.
- These findings align with statistical bootstrapping accounts, where infants first utilize domain-general distributional information for word segmentation, and then identify language-specific patterns from segmented words.
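As a rough, purely illustrative aside (not material from the paper): the sketch below shows one common way to compute the domain-general TP statistic, the forward transitional probability TP(x, y) = count(x y) / count(x) between adjacent syllables in a syllable stream. The toy stream, the function name, and the made-up "words" are all assumptions for illustration.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Forward TP for each adjacent syllable pair: TP(x, y) = count(x y) / count(x)."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

# Toy stream built by concatenating invented trisyllabic "words" such as "go la bu" and "ti bu do".
stream = "go la bu ti bu do go la bu da ro pi ti bu do go la bu".split()
for pair, tp in sorted(transitional_probabilities(stream).items(), key=lambda kv: -kv[1]):
    print(pair, round(tp, 2))
```

In artificial-language studies of this kind, syllable pairs inside a designed "word" typically have higher TP than pairs spanning a word boundary, so dips in TP mark candidate word boundaries.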
- PAR ID: 10539829
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: Developmental Science
- ISSN: 1363-755X
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Each language has its own way to mark grammatical information such as gender, number, and tense. For example, English marks number and tense/aspect information with morphological suffixes (e.g., -s or -ed). These morphological suffixes are crucial for language acquisition: they are basic building blocks of syntax, encode relationships, and convey meaning. Previous research shows that English-learning infants recognize morphological suffixes attached to nonce words by the end of the first year, although even 8-month-olds recognize them when they are attached to known words. These results support an acquisition trajectory in which the discovery of meaning guides infants' acquisition of morphological suffixes. In this paper, we re-evaluated English-learning infants' knowledge of morphological suffixes in the first year of life. We found that 6-month-olds successfully segmented nonce words suffixed with -s, -ing, -ed, and a pseudo-morpheme -sh. Additionally, they related nonce words suffixed with -s, but not -ing, -ed, or the pseudo-morpheme -sh, to their stems. By 8 months, infants were also able to relate nonce words suffixed with -ing to their stems. Our results show that infants demonstrate knowledge of morphological relatedness from the earliest stages of acquisition, and they do so even in the absence of access to meaning. Based on these results, we argue for a developmental timeline in which the acquisition of morphology is, at least, concurrent with the acquisition of phonology and meaning.
-
Abstract To efficiently recognize words, children learning an intonational language like English should avoid interpreting pitch-contour variation as signaling lexical contrast, despite the relevance of pitch at other levels of structure. Thus far, the developmental time-course with which English-learning children rule out pitch as a contrastive feature has been incompletely characterized. Prior studies have tested diverse lexical contrasts and have not tested beyond 30 months. To specify the developmental trajectory over a broader age range, we extended a prior study (Quam & Swingley, 2010), in which 30-month-olds and adults disregarded pitch changes, but attended to vowel changes, in newly learned words. Using the same phonological contrasts, we tested 3- to 5-year-olds, 24-month-olds, and 18-month-olds. The older two groups were tested using the language-guided-looking method. The oldest group attended to vowels but not pitch. Surprisingly, 24-month-olds ignored not just pitch but sometimes vowels as well, conflicting with prior findings of phonological constraint at 24 months. The youngest group was tested using the Switch habituation method, half with additional phonetic variability in training. Eighteen-month-olds learned both pitch-contrasted and vowel-contrasted words, whether or not additional variability was present. Thus, native-language phonological constraint was not evidenced prior to 30 months (Quam & Swingley, 2010). We contextualize our findings within other recent work in this area.
-
Abstract What is vision's role in driving early word production? To answer this, we assessed parent-report vocabulary questionnaires administered to congenitally blind children (N = 40, mean age = 24 months, range 7–57 months) and compared the size and contents of their productive vocabulary to those of a large normative sample of sighted children (N = 6574). We found that, on average, blind children showed a roughly half-year vocabulary delay relative to sighted children, amid considerable variability. However, the content of blind and sighted children's vocabulary was statistically indistinguishable in word length, part of speech, semantic category, concreteness, interactiveness, and perceptual modality. At a finer-grained level, we also found that words' perceptual properties intersect with children's perceptual abilities. Our findings suggest that while an absence of visual input may initially make vocabulary development more difficult, the content of the early productive vocabulary is largely resilient to differences in perceptual access (a toy sketch of this kind of content comparison appears after this list).

Research Highlights:
- Infants and toddlers born blind (with no other diagnoses) show a 7.5-month productive vocabulary delay on average, with wide variability.
- Across the studied age range (7–57 months), vocabulary delays widened with age.
- Blind and sighted children's early vocabularies contain similar distributions of word lengths, parts of speech, semantic categories, and perceptual modalities.
- Blind children (but not sighted children) were more likely to say visual words that could also be experienced through other senses.
-
Abstract Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language-learning environment of American English-learning toddlers by testing how well phonetic clarity and referential clarity align in infant-directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less clear ones. Such multimodal "gems" offer special opportunities for early word learning.

Research Highlights:
- In parent-infant interaction, parents' referential intentions are sometimes clear and sometimes unclear; likewise, parents' pronunciation is sometimes clear and sometimes quite difficult to understand.
- We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance.
- Thus, there are globally valuable instances ("gems") from which children could learn about words' pronunciations and words' meanings at the same time.
- Homing in on clear phonetic instances and filtering out less clear ones would help children identify these multimodal "gems" during word learning.
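As a purely illustrative aside to the blind-and-sighted vocabulary study above (not the authors' data or analysis): a minimal sketch of how the composition of two vocabularies, here part-of-speech counts, might be compared with a chi-square test. The counts, category labels, and use of SciPy are assumptions for illustration.

```python
from scipy.stats import chi2_contingency

# Hypothetical counts of produced words by part of speech (invented, not the study's data).
#                  nouns  verbs  adjectives  other
blind_counts   = [   90,    30,        15,     10]
sighted_counts = [  400,   140,        70,     45]

chi2, p, dof, expected = chi2_contingency([blind_counts, sighted_counts])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
# A non-significant p would be consistent with similar part-of-speech composition,
# in the spirit of the reported similarity between blind and sighted vocabularies.
```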
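Similarly, for the phonetic-clarity finding in the last abstract, here is a minimal, hypothetical sketch of an item-level association test: a rank correlation between a per-token phonetic-clarity score and an HSP-style referential-clarity score (proportion of correct referent guesses). All values and variable names are invented; this is not the study's analysis pipeline.

```python
from scipy.stats import spearmanr

# Invented per-token scores: phonetic clarity (e.g., an intelligibility rating in [0, 1])
# and referential clarity (proportion of HSP observers who guessed the intended referent).
phonetic_clarity    = [0.90, 0.40, 0.70, 0.20, 0.80, 0.50, 0.30, 0.95]
referential_clarity = [0.80, 0.20, 0.60, 0.10, 0.70, 0.40, 0.30, 0.90]

rho, p = spearmanr(phonetic_clarity, referential_clarity)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
# A positive rho would mirror the reported pattern: clearer pronunciations tend to
# co-occur with referentially clearer moments ("gems").
```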