Abstract: Deaf individuals have unique sensory and linguistic experiences that influence how they read and become skilled readers. This review presents our current understanding of the neurocognitive underpinnings of reading skill in deaf adults. Key behavioural and neuroimaging studies are integrated to build a profile of skilled adult deaf readers and to examine how changes in visual attention and reduced access to auditory input and phonology shape how they read both words and sentences. Crucially, the behaviours, processes, and neural circuitry of deaf readers are compared to those of hearing readers with similar reading ability to help identify alternative pathways to reading success. Overall, sensitivity to orthographic and semantic information is comparable for skilled deaf and hearing readers, but deaf readers rely less on phonology and show greater engagement of the right hemisphere in visual word processing. During sentence reading, deaf readers process visual word forms more efficiently and may have a greater reliance on and altered connectivity to semantic information compared to their hearing peers. These findings highlight the plasticity of the reading system and point to alternative pathways to reading success.
Early Production of Imperceptible Words by Infants and Toddlers Born Deaf or Blind
(This content will become publicly available on February 19, 2026.)
We investigate the roles of linguistic and sensory experience in the early-produced visual, auditory, and abstract words of congenitally blind toddlers, deaf toddlers, and typically sighted/hearing peers. We also assess the role of language access by comparing early word production in children learning English or American Sign Language (ASL) from birth versus at a delay. Using parental report data on child word production from the MacArthur-Bates Communicative Development Inventory, we found evidence that while children produced words referring to imperceptible referents before age 2, such words were less likely to be produced relative to words with perceptible referents. For instance, blind (vs. sighted) children said fewer highly visual words like “blue” or “see”; deaf signing (vs. hearing) children produced fewer auditory signs like HEAR. Additionally, in spoken English and ASL, children who received delayed language access were less likely to produce words overall. These results demonstrate and begin to quantify how linguistic and sensory access may influence which words young children produce.
- Award ID(s): 2337766
- PAR ID: 10623946
- Publisher / Repository: MIT Press
- Date Published:
- Journal Name: Open Mind
- Volume: 9
- ISSN: 2470-2986
- Page Range / eLocation ID: 475 to 500
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract: Research has shown a link between the acquisition of numerical concepts and language, but exactly how linguistic input matters for numerical development remains unclear. Here, we examine both symbolic (number word knowledge) and non-symbolic (numerical discrimination) numerical abilities in a population in which access to language is limited early in development—oral deaf and hard of hearing (DHH) preschoolers born to hearing parents who do not know a sign language. The oral DHH children demonstrated lower numerical discrimination skills, verbal number knowledge, conceptual understanding of the word “more”, and vocabulary relative to their hearing peers. Importantly, however, analyses revealed that group differences in the numerical tasks, but not vocabulary, disappeared when differences in the amount of time children had had auditory access to spoken language input via hearing technology were taken into account. Results offer insights regarding the role language plays in emerging number concepts.
- Abstract: Limited language experience in childhood is common among deaf individuals, which prior research has shown to lead to low levels of language processing. Although basic structures such as word order have been found to be resilient to conditions of sparse language input in early life, whether they are robust to conditions of extreme language delay is unknown. The sentence comprehension strategies of post‐childhood, first‐language (L1) learners of American Sign Language (ASL) with at least 9 years of language experience were investigated, in comparison to two control groups of learners with full access to language from birth (deaf native signers and hearing L2 learners who were native English speakers). The results of a sentence‐to‐picture matching experiment show that event knowledge overrides word order for post‐childhood L1 learners, regardless of the animacy of the subject, while both deaf native signers and hearing L2 signers consistently rely on word order to comprehend sentences. Language inaccessibility throughout early childhood impedes the acquisition of even basic word order. Similar to the strategies used by very young children prior to the development of basic sentence structure, post‐childhood L1 learners rely more on context and event knowledge to comprehend sentences. Language experience during childhood is critical to the development of basic sentence structure.
- Abstract: What is vision's role in driving early word production? To answer this, we assessed parent‐report vocabulary questionnaires administered to congenitally blind children (N = 40, mean age = 24 months [range: 7–57 months]) and compared the size and contents of their productive vocabulary to those of a large normative sample of sighted children (N = 6574). We found that on average, blind children showed a roughly half‐year vocabulary delay relative to sighted children, amid considerable variability. However, the content of blind and sighted children's vocabulary was statistically indistinguishable in word length, part of speech, semantic category, concreteness, interactiveness, and perceptual modality. At a finer‐grained level, we also found that words' perceptual properties intersect with children's perceptual abilities. Our findings suggest that while an absence of visual input may initially make vocabulary development more difficult, the content of the early productive vocabulary is largely resilient to differences in perceptual access.
  Research Highlights:
  - Infants and toddlers born blind (with no other diagnoses) show a 7.5 month productive vocabulary delay on average, with wide variability.
  - Across the studied age range (7–57 months), vocabulary delays widened with age.
  - Blind and sighted children's early vocabularies contain similar distributions of word lengths, parts of speech, semantic categories, and perceptual modalities.
  - Blind children (but not sighted children) were more likely to say visual words which could also be experienced through other senses.
- Abstract: Psycholinguistic research on children's early language environments has revealed many potential challenges for language acquisition. One is that in many cases, referents of linguistic expressions are hard to identify without prior knowledge of the language. Likewise, the speech signal itself varies substantially in clarity, with some productions being very clear, and others being phonetically reduced, even to the point of uninterpretability. In this study, we sought to better characterize the language‐learning environment of American English‐learning toddlers by testing how well phonetic clarity and referential clarity align in infant‐directed speech. Using an existing Human Simulation Paradigm (HSP) corpus with referential transparency measurements and adding new measures of phonetic clarity, we found that the phonetic clarity of words' first mentions significantly predicted referential clarity (how easy it was to guess the intended referent from visual information alone) at that moment. Thus, when parents' speech was especially clear, the referential semantics were also clearer. This suggests that young children could use the phonetics of speech to identify globally valuable instances that support better referential hypotheses, by homing in on clearer instances and filtering out less‐clear ones. Such multimodal “gems” offer special opportunities for early word learning.
  Research Highlights:
  - In parent‐infant interaction, parents' referential intentions are sometimes clear and sometimes unclear; likewise, parents' pronunciation is sometimes clear and sometimes quite difficult to understand.
  - We find that clearer referential instances go along with clearer phonetic instances, more so than expected by chance.
  - Thus, there are globally valuable instances (“gems”) from which children could learn about words' pronunciations and words' meanings at the same time.
  - Homing in on clear phonetic instances and filtering out less‐clear ones would help children identify these multimodal “gems” during word learning.