Abstract: Visual word recognition is facilitated by the presence of orthographic neighbors that mismatch the target word by a single letter substitution. However, researchers typically do not consider where neighbors mismatch the target. In light of evidence that some letter positions are more informative than others, we investigate whether the influence of orthographic neighbors differs across letter positions. To do so, we quantify the number of enemies at each letter position (how many neighbors mismatch the target word at that position). Analyses of reaction time data from a visual word naming task indicate that the influence of enemies differs across letter positions, with the negative impacts of enemies being most pronounced at letter positions where readers have low prior uncertainty about which letters they will encounter (i.e., positions with low entropy). To understand the computational mechanisms that give rise to such positional entropy effects, we introduce a new computational model, VOISeR (Visual Orthographic Input Serial Reader), which receives orthographic inputs in parallel and produces an over-time sequence of phonemes as output. VOISeR produces a pattern of results similar to the human data, suggesting that positional entropy effects may emerge even when letters are not sampled serially. Finally, we demonstrate that these effects also emerge in human subjects' data from a lexical decision task, illustrating the generalizability of positional entropy effects across visual word recognition paradigms. Taken together, this work suggests that research into orthographic neighbor effects in visual word recognition should also consider differences between letter positions.
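The two quantities the abstract relies on, per-position enemy counts and positional letter entropy, can be made concrete with a short sketch. The Python below is a minimal illustration rather than the authors' code: it uses a toy lexicon, unweighted type counts (the paper's analyses may weight letters by word frequency), and the standard single-substitution definition of an orthographic neighbor.

```python
# Minimal sketch (not the authors' code): per-position enemy counts and positional
# letter entropy for a toy lexicon. Assumes unweighted type counts and the standard
# single-substitution definition of an orthographic neighbor.
from collections import Counter
from math import log2

LEXICON = ["cat", "cab", "car", "cot", "bat", "bar", "rat", "rot"]  # toy example

def neighbors(word, lexicon):
    """Same-length words differing from `word` by exactly one letter substitution."""
    return [w for w in lexicon
            if w != word and len(w) == len(word)
            and sum(a != b for a, b in zip(w, word)) == 1]

def enemies_by_position(word, lexicon):
    """For each letter position, count neighbors that mismatch `word` at that position."""
    counts = [0] * len(word)
    for n in neighbors(word, lexicon):
        pos = next(i for i, (a, b) in enumerate(zip(word, n)) if a != b)
        counts[pos] += 1
    return counts

def positional_entropy(lexicon, length):
    """Shannon entropy (bits) of the letter distribution at each position."""
    words = [w for w in lexicon if len(w) == length]
    entropies = []
    for pos in range(length):
        freqs = Counter(w[pos] for w in words)
        total = sum(freqs.values())
        entropies.append(-sum(c / total * log2(c / total) for c in freqs.values()))
    return entropies

print(enemies_by_position("cat", LEXICON))  # -> [2, 1, 2] for this toy lexicon
print(positional_entropy(LEXICON, 3))       # lower entropy = more predictable position
```

Positional friends (neighbors that match the target at a given position, as in one of the related records below) are simply the neighbors not counted as enemies at that position.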
- Award ID(s):
- 2127135
- PAR ID:
- 10353292
- Date Published:
- Journal Name:
- Psychonomic Bulletin & Review
- ISSN:
- 1069-9384
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Corina, David P. (Ed.) Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL–English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm, with centrally presented targets shown for 200 ms and preceded by 100 ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts, which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL–English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.
-
In visual word recognition, having more orthographic neighbors (words that differ by a single letter) generally speeds access to a target word. But neighbors can mismatch at any letter position. In light of evidence that information content varies between letter positions, we consider how neighbor effects might vary across letter positions. Results from a word naming task indicate that response latencies are better predicted by the relative number of positional friends and enemies (respectively, neighbors that match the target at a given letter position and those that mismatch) at some letter positions than at others. In particular, benefits from friends are most pronounced at positions associated with low a priori uncertainty (positional entropy). We consider how these results relate to previous accounts of position-specific effects and how such effects might emerge in serial and parallel processing systems.
-
Abstract: Previous research has shown that, unlike misspelled common words, misspelled brand names are sensitive to visual letter similarity effects (e.g., a misspelling created with a visually similar letter is often recognized as a legitimate brand name, whereas one created with a visually dissimilar letter is not). This pattern poses problems for models that assume word identification is based exclusively on abstract codes. Here, we investigated the role of visual letter similarity using another type of word that is often presented in a more homogeneous format than common words: city names. We found a visual letter similarity effect for misspelled city names (e.g., a visually similar misspelling was often recognized as a word, whereas a dissimilar one was not) at relatively short stimulus durations (200 ms; Experiment 2), but not when the stimuli were presented until response (Experiment 1). Notably, misspelled common words did not show a visual letter similarity effect at brief 200- and 150-ms durations (e.g., a visually similar misspelling was not recognized as a word more often than a dissimilar one; Experiments 3–4). These findings provide further evidence that consistency in the format of presentation may shape the representation of words in the mental lexicon, an influence that may be more salient in scenarios where processing resources are limited (e.g., brief exposure presentations).
-
Isik, Leyla (Ed.) After years of experience, humans become experts at perceiving letters. Is this visual capacity attained by learning specialized letter features, or by reusing general visual features previously learned in service of object categorization? To explore this question, we first measured the perceptual similarity of letters in two behavioral tasks, visual search and letter categorization. Then, we trained deep convolutional neural networks on either 26-way letter categorization or 1000-way object categorization, as a way to operationalize possible specialized letter features and general object-based features, respectively. We found that the general object-based features more robustly correlated with the perceptual similarity of letters. We then operationalized additional forms of experience-dependent letter specialization by altering object-trained networks with varied forms of letter training; however, none of these forms of letter specialization improved the match to human behavior. Thus, our findings reveal that it is not necessary to appeal to specialized letter representations to account for perceptual similarity of letters. Instead, we argue that it is more likely that the perception of letters depends on domain-general visual features.
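The core comparison in this last record, correlating a network's letter similarity structure with human perceptual similarity, can be sketched in a few lines. Everything in the snippet below is an assumption made for illustration: a pretrained torchvision ResNet-18 stands in for an object-trained network, the letters are rendered crudely with Pillow's default font, and the behavioral similarity matrix is a random placeholder that would be replaced by real visual-search or categorization data.

```python
# Minimal sketch (not the authors' pipeline): extract letter representations from
# an object-trained CNN and correlate their pairwise similarity with a behavioral
# similarity matrix. The behavioral matrix here is a random placeholder.
import string

import numpy as np
import torch
from PIL import Image, ImageDraw
from scipy.stats import spearmanr
from torchvision.models import resnet18, ResNet18_Weights

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)
model.fc = torch.nn.Identity()                 # expose penultimate-layer features
model.eval()
preprocess = weights.transforms()              # the weights' own preprocessing

def letter_image(ch, size=224):
    """Render a single letter on a white canvas (crude stand-in for real stimuli)."""
    img = Image.new("RGB", (size, size), "white")
    ImageDraw.Draw(img).text((size // 2, size // 2), ch, fill="black")
    return img

letters = list(string.ascii_uppercase)
with torch.no_grad():
    feats = torch.stack([
        model(preprocess(letter_image(c)).unsqueeze(0)).squeeze(0) for c in letters
    ])

# Pairwise cosine similarity between letter representations
feats = torch.nn.functional.normalize(feats, dim=1)
model_sim = (feats @ feats.T).numpy()

# Placeholder behavioral similarity matrix -- substitute real confusion or
# visual-search data to run the actual analysis
rng = np.random.default_rng(0)
behav_sim = rng.random((26, 26))
behav_sim = (behav_sim + behav_sim.T) / 2

iu = np.triu_indices(26, k=1)                  # off-diagonal letter pairs only
rho, p = spearmanr(model_sim[iu], behav_sim[iu])
print(f"model-behavior correlation: rho={rho:.2f}, p={p:.3f}")
```

With real behavioral data in place of the random matrix, the same correlation could be computed separately for letter-trained and object-trained networks to reproduce the kind of comparison the abstract describes.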