


Title: Deaf readers use leftward information to read more efficiently: Evidence from eye tracking

Little is known about how information to the left of fixation impacts reading and how it may help to integrate what has been read into the context of the sentence. To better understand the role of this leftward information and how it may be beneficial during reading, we compared the sizes of the leftward span for reading-matched deaf signers (n = 32) and hearing adults (n = 40) using a gaze-contingent moving window paradigm with windows of 1, 4, 7, 10, and 13 characters to the left, as well as a no-window condition. All deaf participants were prelingually and profoundly deaf, used American Sign Language (ASL) as a primary means of communication, and were exposed to ASL before age eight. Analysis of reading rates indicated that deaf readers had a leftward span of 10 characters, compared with 4 characters for hearing readers, and the size of the span was positively related to reading comprehension ability for deaf but not hearing readers. These findings suggest that deaf readers may engage in continued processing of words to the left of fixation, which makes reading more efficient and reflects a qualitatively different reading process from that of hearing readers.
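As a concrete illustration of the gaze-contingent moving window technique, the sketch below masks every character more than a fixed number of positions to the left of the current fixation while leaving the fixated character and everything to its right visible. This is a minimal Python sketch under our own assumptions (the function name, the "x" mask, and the choice to keep inter-word spaces visible are illustrative); it is not the authors' experimental software, which would also update the display in real time from eye-tracker samples.

```python
# Minimal sketch of a leftward moving-window display, assuming a simple
# character mask ("x") and that spaces between words stay visible.
# Illustrative only; not the study's actual presentation code.

def apply_leftward_window(text, fixation_idx, window_size, mask="x"):
    """Mask characters more than `window_size` positions to the left of the
    current fixation; the fixated character and everything to its right
    remain visible. `window_size=None` gives the no-window condition."""
    if window_size is None:
        return text
    visible_start = max(0, fixation_idx - window_size)
    return "".join(mask if i < visible_start and ch != " " else ch
                   for i, ch in enumerate(text))

line = "the quick brown fox jumps over the lazy dog"
print(apply_leftward_window(line, fixation_idx=20, window_size=4))
# -> xxx xxxxx xxxxx fox jumps over the lazy dog
```

In the experiment, a call like this would be triggered on every eye-tracker sample so that the window moves with each fixation; varying `window_size` across the values above (1, 4, 7, 10, 13, or None) yields the six conditions.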

 
Award ID(s): 2120507, 2120546
NSF-PAR ID: 10515153
Publisher / Repository: Sage Journals
Journal Name: Quarterly Journal of Experimental Psychology
ISSN: 1747-0218
Sponsoring Org: National Science Foundation
More Like This
  1. Abstract

    Deaf individuals have unique sensory and linguistic experiences that influence how they read and become skilled readers. This review presents our current understanding of the neurocognitive underpinnings of reading skill in deaf adults. Key behavioural and neuroimaging studies are integrated to build a profile of skilled adult deaf readers and to examine how changes in visual attention and reduced access to auditory input and phonology shape how they read both words and sentences. Crucially, the behaviours, processes, and neural circuitry of deaf readers are compared to those of hearing readers with similar reading ability to help identify alternative pathways to reading success. Overall, sensitivity to orthographic and semantic information is comparable for skilled deaf and hearing readers, but deaf readers rely less on phonology and show greater engagement of the right hemisphere in visual word processing. During sentence reading, deaf readers process visual word forms more efficiently and may have a greater reliance on and altered connectivity to semantic information compared to their hearing peers. These findings highlight the plasticity of the reading system and point to alternative pathways to reading success.

     
  2. Abstract

    The lexical quality hypothesis proposes that the quality of phonological, orthographic, and semantic representations impacts reading comprehension. In Study 1, we evaluated the contributions of lexical quality to reading comprehension in 97 deaf and 98 hearing adults matched for reading ability. Phonological awareness was a strong predictor for hearing readers, whereas for deaf readers orthographic precision and semantic knowledge, not phonology, predicted reading comprehension (assessed by two different tests). For deaf readers, the architecture of the reading system adapts by shifting reliance from (coarse-grained) phonological representations to high-quality orthographic and semantic representations. In Study 2, we examined the contribution of American Sign Language (ASL) variables to reading comprehension in 83 deaf adults. Fingerspelling (FS) and ASL comprehension skills predicted reading comprehension. We suggest that FS might reinforce orthographic-to-semantic mappings and that sign language comprehension may serve as a linguistic basis for the development of skilled reading in deaf signers.

     
  3. Corina, David P. (Ed.)
    Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL–English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm in which centrally presented targets appeared for 200 ms, preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts, which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL–English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.
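For concreteness, the sketch below translates the priming timeline described above (a 100 ms prime immediately followed by a 200 ms centrally presented target) into display frames. The 60 Hz refresh rate and the event names are illustrative assumptions, not details reported in the study.

```python
# Minimal sketch of the unmasked priming trial timeline described above.
# Durations come from the abstract (100 ms prime, 200 ms target); the
# 60 Hz refresh rate and event labels are illustrative assumptions.

REFRESH_HZ = 60
FRAME_MS = 1000 / REFRESH_HZ  # ~16.67 ms per frame

def ms_to_frames(duration_ms):
    """Round a duration to the nearest whole number of display frames."""
    return round(duration_ms / FRAME_MS)

trial = [
    ("prime",  ms_to_frames(100)),  # English letter or fingerspelling font
    ("target", ms_to_frames(200)),  # centrally presented target
]

for event, frames in trial:
    print(f"{event}: {frames} frames ({frames * FRAME_MS:.1f} ms)")
# prime: 6 frames (100.0 ms)
# target: 12 frames (200.0 ms)
```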
  4. Abstract

    We examined the impact of exposure to a signed language (American Sign Language, or ASL) at different ages on the neural systems that support spoken language phonemic discrimination in deaf individuals with cochlear implants (CIs). Deaf CI users (N = 18, age = 18–24 yrs) who were exposed to a signed language at different ages and hearing individuals (N = 18, age = 18–21 yrs) completed a phonemic discrimination task in a spoken native (English) and non-native (Hindi) language while undergoing functional near-infrared spectroscopy neuroimaging. Behaviorally, deaf CI users who received a CI early versus later in life showed better English phonemic discrimination, although phonemic discrimination was poor relative to that of hearing individuals. Importantly, the age of exposure to ASL was not related to phonemic discrimination. Neurally, early-life language exposure, irrespective of modality, was associated with greater neural activation of left-hemisphere language areas critically involved in phonological processing during the phonemic discrimination task in deaf CI users. In particular, early exposure to ASL was associated with increased activation in the left hemisphere's classic language regions for native versus non-native language phonemic contrasts for deaf CI users who received a CI later in life. For deaf CI users who received a CI early in life, the age of exposure to ASL was not related to neural activation during phonemic discrimination. Together, the findings suggest that early signed language exposure does not negatively impact spoken language processing in deaf CI users, but may instead potentially offset the negative effects of language deprivation that deaf children without any signed language exposure experience prior to implantation. This empirical evidence aligns with and lends support to recent perspectives regarding the impact of ASL exposure in the context of CI usage.

     
  5. Abstract

    The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient–focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
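As a rough illustration of how such morph animations can be constructed, the sketch below linearly cross-dissolves two aligned face images over a 10-s sequence. This is a simplification under stated assumptions: stimulus morphs in published work are typically generated with dedicated morphing software that warps facial geometry as well as blending pixel intensities, and the frame rate here is our own choice.

```python
import numpy as np

def morph_sequence(neutral, emotional, duration_s=10.0, fps=25):
    """Linearly blend two aligned face images into a morph animation.
    At t = 0 the frame is fully neutral; at the end it is fully emotional.
    A simple pixel cross-dissolve; real stimuli also warp facial geometry."""
    n_frames = int(duration_s * fps)
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)  # blend weight runs 0.0 -> 1.0
        frames.append((1.0 - t) * neutral + t * emotional)
    return frames

# Example with dummy 64x64 grayscale "faces"
neutral = np.zeros((64, 64))
happy = np.ones((64, 64))
frames = morph_sequence(neutral, happy)
print(len(frames))  # 250 frames for a 10-s animation at 25 fps
```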