

Title: The role of vision in the acquisition of words: Vocabulary development in blind toddlers
Abstract: What is vision's role in driving early word production? To answer this, we assessed parent‐report vocabulary questionnaires administered to congenitally blind children (N = 40, Mean age = 24 months [R: 7–57 months]) and compared the size and contents of their productive vocabulary to those of a large normative sample of sighted children (N = 6574). We found that on average, blind children showed a roughly half‐year vocabulary delay relative to sighted children, amid considerable variability. However, the content of blind and sighted children's vocabulary was statistically indistinguishable in word length, part of speech, semantic category, concreteness, interactiveness, and perceptual modality. At a finer‐grained level, we also found that words' perceptual properties intersect with children's perceptual abilities. Our findings suggest that while an absence of visual input may initially make vocabulary development more difficult, the content of the early productive vocabulary is largely resilient to differences in perceptual access.

Research Highlights:
- Infants and toddlers born blind (with no other diagnoses) show a 7.5‐month productive vocabulary delay on average, with wide variability.
- Across the studied age range (7–57 months), vocabulary delays widened with age.
- Blind and sighted children's early vocabularies contain similar distributions of word lengths, parts of speech, semantic categories, and perceptual modalities.
- Blind children (but not sighted children) were more likely to say visual words which could also be experienced through other senses.
Award ID(s):
2337766
PAR ID:
10487051
Author(s) / Creator(s):
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Developmental Science
ISSN:
1363-755X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear, and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language‐learning environment of American English‐learning toddlers by testing how well phonetic clarity and referential clarity align in infant‐directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less‐clear ones. Such multimodal "gems" offer special opportunities for early word learning.

Research Highlights:
- In parent‐infant interaction, parents' referential intentions are sometimes clear and sometimes unclear; likewise, parents' pronunciation is sometimes clear and sometimes quite difficult to understand.
- We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance.
- Thus, there are globally valuable instances ("gems") from which children could learn about words' pronunciations and words' meanings at the same time.
- Homing in on clear phonetic instances and filtering out less‐clear ones would help children identify these multimodal "gems" during word learning.
  2. We investigate the roles of linguistic and sensory experience in the early-produced visual, auditory, and abstract words of congenitally-blind toddlers, deaf toddlers, and typically sighted/hearing peers. We also assess the role of language access by comparing early word production in children learning English or American Sign Language (ASL) from birth, versus at a delay. Using parental report data on child word production from the MacArthur-Bates Communicative Development Inventory, we found evidence that while children produced words referring to imperceptible referents before age 2, such words were less likely to be produced relative to words with perceptible referents. For instance, blind (vs. sighted) children said fewer highly visual words like "blue" or "see"; deaf signing (vs. hearing) children produced fewer auditory signs like HEAR. Additionally, in spoken English and ASL, children who received delayed language access were less likely to produce words overall. These results demonstrate and begin to quantify how linguistic and sensory access may influence which words young children produce.
  3. We compared everyday language input to young congenitally-blind children with no additional disabilities (N=15, 6–30 mo., M:16 mo.) and demographically-matched sighted peers (N=15, 6–31 mo., M:16 mo.). By studying whether the language input of blind children differs from that of their sighted peers, we aimed to determine whether, in principle, the language acquisition patterns observed in blind and sighted children could be explained by aspects of the speech they hear. Children wore LENA recorders to capture the auditory language environment in their homes. Speech in these recordings was then analyzed with a mix of automated and manually-transcribed measures across various subsets and dimensions of language input. These included measures of quantity (adult words), interaction (conversational turns and child-directed speech), linguistic properties (lexical diversity and mean length of utterance), and conceptual features (talk centered around the here-and-now; talk focused on visual referents that would be inaccessible to the blind but not sighted children). Overall, we found broad similarity across groups in the speech's quantitative, interactive, and linguistic properties. The only exception was that blind children's language environments contained slightly but significantly more talk about past/future/hypothetical events than sighted children's input; both groups received equivalent quantities of "visual" speech input. The findings challenge the notion that blind children's language input diverges substantially from sighted children's; while the input is highly variable across children, it is not systematically so across groups, across nearly all measures. The findings suggest instead that blind children and sighted children alike receive input that readily supports their language development, with open questions remaining regarding how this input may be differentially leveraged by language learners in early childhood.
  4. Socioeconomic status (SES) has been repeatedly linked to the developmental trajectory of vocabulary acquisition in young children. However, the nature of this relationship remains underspecified. In particular, despite an extensive literature documenting young children's reliance on a host of skills and strategies to learn new words, little attention has been paid to whether and how these skills relate to measures of SES and vocabulary acquisition. To evaluate these relationships, we conducted two studies. In Study 1, 205 2.5‐ to 3.5‐year‐old children from widely varying socioeconomic backgrounds were tested on a broad range of word‐learning skills that tap their ability to resolve cases of ambiguous reference and to extend words appropriately. Children's executive functioning and phonological memory skills were also assessed. In Study 2, 77 of those children returned for a follow‐up session several months later, at which time two additional measures of vocabulary were obtained. Using Structural Equation Modeling (SEM) and multivariate regression, we provide evidence of the mediating role of word‐learning skills on the relationship between SES and vocabulary skill over the course of early development. 
  5. Abstract Wealth‐based disparities in health care wherein the poor receive undertreatment in painful conditions are a prominent issue that requires immediate attention. Research with adults suggests that these disparities are partly rooted in stereotypes associating poor individuals with pain insensitivity. However, whether and how children consider a sufferer's wealth status in their pain perceptions remains unknown. The present work addressed this question by testing 4‐ to 9‐year‐olds from the US and China. In Study 1 (N = 108, 56 girls, 79% White), US participants saw rich and poor White children experiencing identical injuries and indicated who they thought felt more pain. Although 4‐ to 6‐year‐olds responded at chance, children aged seven and above attributed more pain to the poor than to the rich. Study 2 with a new sample of US children (N = 111, 56 girls, 69% White) extended this effect to judgments of White adults’ pain. Pain judgments also informed children's prosocial behaviors, leading them to provide medical resources to the poor. Studies 3 (N = 118, 59 girls, 100% Asian) and 4 (N = 80, 40 girls, 100% Asian) found that, when evaluating White and Asian people's suffering, Chinese children began to attribute more pain to the poor than to the rich earlier than US children. Thus, unlike US adults, US children and Chinese children recognize the poor's pain from early on. These findings add to our knowledge of group‐based beliefs about pain sensitivity and have broad implications on ways to promote equitable health care. 
Research Highlights:
- Four studies examined whether 4‐ to 9‐year‐old children's pain perceptions were influenced by sufferers' wealth status.
- US children attributed more pain to White individuals of low wealth status than to those of high wealth status by age seven.
- Chinese children demonstrated an earlier tendency to attribute more pain to the poor (versus the rich) compared to US children.
- Children's wealth‐based pain judgments underlay their tendency to provide healthcare resources to people of low wealth status.