Title: A Rational Model of Word Skipping in Reading: Ideal Integration of Visual and Linguistic Information
Abstract Readers intentionally do not fixate some words, presumably those they have already identified. In a rational model of reading, these word-skipping decisions should be complex functions of the particular word, the linguistic context, and the visual information available. In contrast, heuristic models of reading predict only additive effects of word and context features. Here we test these predictions by implementing a rational model with Bayesian inference and predicting human skipping with the entropy of the model's posterior distribution. Entropy significantly predicted skipping above a strong baseline model that included word and context features. This pattern held for entropy measures from rational models with a frequency prior, but not from models with a 5-gram prior. These results suggest complex interactions between visual input and linguistic knowledge, as predicted by the rational model of reading, and a dominant role of frequency in skipping decisions.
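The rational account in the abstract can be illustrated with a minimal sketch. Everything below (the toy lexicon, the frequency prior, the letter-confusion noise model) is an assumption of this example, not the paper's implementation: a Bayesian reader combines a frequency prior over candidate words with a noisy visual likelihood, and the Shannon entropy of the resulting posterior measures how uncertain word identity remains — low entropy corresponds to an identified (and thus skippable) word.

```python
import math

# Toy frequency prior P(w) over a three-word lexicon (assumed values).
LEXICON = {"cat": 0.5, "car": 0.3, "can": 0.2}

def letter_likelihood(percept, letter, p_correct=0.8):
    """P(percept | letter): correct with p_correct, else uniform over the
    25 other letters (a deliberately simple confusion model)."""
    return p_correct if percept == letter else (1 - p_correct) / 25

def posterior(percepts):
    """P(w | percepts) proportional to P(w) * prod_i P(percept_i | w_i)."""
    scores = {}
    for word, prior in LEXICON.items():
        likelihood = 1.0
        for p, l in zip(percepts, word):
            likelihood *= letter_likelihood(p, l)
        scores[word] = prior * likelihood
    z = sum(scores.values())
    return {w: s / z for w, s in scores.items()}

def entropy(dist):
    """Shannon entropy in bits; high entropy means the word is not yet
    identified, so the model predicts a fixation rather than a skip."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

clear = entropy(posterior("cat"))  # clear visual input: low entropy
noisy = entropy(posterior("caz"))  # ambiguous final letter: higher entropy
print(clear, noisy)
```

With uninformative visual input the posterior falls back on the frequency prior, which is why the model with a frequency prior ties entropy so closely to word frequency.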
Award ID(s):
1734217
PAR ID:
10127064
Author(s) / Creator(s):
 ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Topics in Cognitive Science
Volume:
12
Issue:
1
ISSN:
1756-8757
Page Range / eLocation ID:
p. 387-401
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Theories of reading posit that decisions about “where” and “when” to move the eyes are driven by visual and linguistic factors, extracted from the perceptual span and word identification span, respectively. We tested this hypothesized dissociation by masking, outside of a visible window, either the spaces between the words (to assess the perceptual span, Experiment 1) or the letters within the words (to assess the word identification span, Experiment 2). We also investigated whether deaf readers’ previously reported larger reading span was specifically linked to one of these spans. We analyzed reading rate to test overall reading efficiency, as well as average saccade length to test “where” decisions and average fixation duration to test “when” decisions. Both hearing and deaf readers’ perceptual spans extended between 10 and 14 characters, and their word identification spans extended to eight characters to the right of fixation. Despite similar sized rightward spans, deaf readers read more efficiently overall and showed a larger increase in reading rate when leftward text was available, suggesting they attend more to leftward information. Neither rightward span was specifically related to where or when decisions for either group. Our results challenge the assumed dissociation between type of reading span and type of saccade decision and indicate that reading efficiency requires access to both perceptual and linguistic information in the parafovea. 
  2. Abstract The detailed study of eye movements in reading has shed considerable light into how language processing unfolds in real time. Yet eye movements in reading remain inadequately studied in non-native (L2) readers, even though much of the world’s population is multilingual. Here we present a detailed analysis of the quantitative functional influences of word length, frequency, and predictability on eye movement measures in reading in a large, linguistically diverse sample of non-native English readers. We find many similar qualitative effects as in L1 readers, but crucially also a proficiency-sensitive “lexicon-context tradeoff”. The most proficient L2 readers’ eye movements approach an L1 pattern, but as L2 proficiency diminishes, readers’ eye movements become less sensitive to a word’s predictability in context and more sensitive to word frequency, which is context-invariant. This tradeoff supports a rational, experience-dependent account of how context-driven expectations are deployed in L2 language processing. 
  3. Abstract Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over-time sequence of phonemes as output. VOISeR produces a similar pattern of results as in the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, such work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.
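The two quantities this abstract relates — positional letter entropy and per-position enemy counts — are simple to compute. The sketch below uses a toy six-word lexicon of its own invention (not the study's materials) to show both calculations.

```python
import math
from collections import Counter

# Toy lexicon (an assumption of this example, not the study's word set).
LEXICON = ["bat", "cat", "hat", "rat", "bad", "bag"]

def positional_entropy(words, pos):
    """Shannon entropy (bits) of the letter distribution at one position:
    a reader's prior uncertainty about which letter appears there."""
    counts = Counter(w[pos] for w in words)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def enemies(target, words, pos):
    """Neighbors (exactly one substitution away from target) whose
    mismatching letter is at position pos."""
    return [w for w in words
            if w != target
            and sum(a != b for a, b in zip(w, target)) == 1
            and w[pos] != target[pos]]

# Position 0 varies across the lexicon (high entropy); position 1 is
# always "a" (zero entropy). "bat" has three enemies at position 0.
print(positional_entropy(LEXICON, 0), enemies("bat", LEXICON, 0))
```

Per the abstract's finding, enemy effects should be strongest at positions like position 1 here, where prior uncertainty (entropy) is lowest.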
  4. Abstract Deaf individuals have unique sensory and linguistic experiences that influence how they read and become skilled readers. This review presents our current understanding of the neurocognitive underpinnings of reading skill in deaf adults. Key behavioural and neuroimaging studies are integrated to build a profile of skilled adult deaf readers and to examine how changes in visual attention and reduced access to auditory input and phonology shape how they read both words and sentences. Crucially, the behaviours, processes, and neural circuitry of deaf readers are compared to those of hearing readers with similar reading ability to help identify alternative pathways to reading success. Overall, sensitivity to orthographic and semantic information is comparable for skilled deaf and hearing readers, but deaf readers rely less on phonology and show greater engagement of the right hemisphere in visual word processing. During sentence reading, deaf readers process visual word forms more efficiently and may have a greater reliance on and altered connectivity to semantic information compared to their hearing peers. These findings highlight the plasticity of the reading system and point to alternative pathways to reading success.
  5. Contemporary autoregressive language models (LMs) trained purely on corpus data have been shown to capture numerous features of human incremental processing. However, past work has also suggested dissociations between corpus probabilities and human next-word predictions. Here we evaluate several state-of-the-art language models for their match to human next-word predictions and to reading time behavior from eye movements. We then propose a novel method for distilling the linguistic information implicit in human linguistic predictions into pre-trained LMs: Cloze Distillation. We apply this method to a baseline neural LM and show potential improvement in reading time prediction and generalization to held-out human cloze data. 
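A Cloze-Distillation-style objective can be sketched in a few lines. The details below (the loss form, the toy cloze counts, the probability floor) are assumptions of this illustration, not the paper's exact formulation: the idea is to fine-tune an LM by minimizing the cross-entropy between its next-word distribution and the empirical distribution of human cloze completions for the same context.

```python
import math

def cloze_distillation_loss(model_probs, human_cloze_counts):
    """Cross-entropy H(p_human, p_model): low when the model's next-word
    distribution matches the empirical human cloze distribution."""
    total = sum(human_cloze_counts.values())
    loss = 0.0
    for word, count in human_cloze_counts.items():
        p_human = count / total
        p_model = model_probs.get(word, 1e-10)  # floor for unseen words
        loss -= p_human * math.log(p_model)
    return loss

# Toy example: 10 humans completed "The cat sat on the ___" (made up).
human = {"mat": 8, "floor": 1, "chair": 1}
model = {"mat": 0.6, "floor": 0.2, "chair": 0.1, "table": 0.1}
print(cloze_distillation_loss(model, human))
```

Gradient steps on this loss would pull the LM's probabilities toward human predictions, which is the distillation step the abstract describes.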