

Title: One more trip to Barcetona: on the special status of visual similarity effects in city names
Abstract

Previous research has shown that, unlike misspelled common words, misspelled brand names are sensitive to visual letter similarity effects (e.g., a misspelling that replaces a letter with a visually similar letter is often recognized as a legitimate brand name, whereas a misspelling with a visually dissimilar letter is not). This pattern poses problems for models that assume word identification is based exclusively on abstract letter codes. Here, we investigated the role of visual letter similarity using another type of word that is often presented in a more homogeneous format than common words: city names. We found a visual letter similarity effect for misspelled city names (e.g., a visually similar misspelling such as Barcetona was often recognized as a word, whereas a visually dissimilar misspelling was not) at relatively short stimulus durations (200 ms; Experiment 2), but not when the stimuli were presented until response (Experiment 1). Notably, misspelled common words did not show a visual letter similarity effect at brief 200- and 150-ms durations (i.e., visually similar misspellings were not recognized as words more often than visually dissimilar ones; Experiments 3–4). These findings provide further evidence that consistency in presentation format may shape the representation of words in the mental lexicon, an influence that may be more salient in scenarios where processing resources are limited (e.g., brief exposure durations).

 
NSF-PAR ID: 10425335
Publisher / Repository: Springer Science + Business Media
Journal Name: Psychological Research
Volume: 88
Issue: 1
ISSN: 0340-0727
Pages: 271-283
Sponsoring Org: National Science Foundation
More Like this
  1. Corina, David P. (Ed.)
    Letter recognition plays an important role in reading and follows different phases of processing, from early visual feature detection to the access of abstract letter representations. Deaf ASL–English bilinguals experience orthography in two forms: English letters and fingerspelling. However, the neurobiological nature of fingerspelling representations, and the relationship between the two orthographies, remains unexplored. We examined the temporal dynamics of single English letter and ASL fingerspelling font processing in an unmasked priming paradigm with centrally presented targets shown for 200 ms and preceded by 100-ms primes. Event-related brain potentials were recorded while participants performed a probe detection task. Experiment 1 examined English letter-to-letter priming in deaf signers and hearing non-signers. We found that English letter recognition is similar for deaf and hearing readers, extending previous findings with hearing readers to unmasked presentations. Experiment 2 examined priming effects between English letters and ASL fingerspelling fonts in deaf signers only. We found that fingerspelling fonts primed both fingerspelling fonts and English letters, but English letters did not prime fingerspelling fonts, indicating a priming asymmetry between letters and fingerspelling fonts. We also found an N400-like priming effect when the primes were fingerspelling fonts, which might reflect strategic access to the lexical names of letters. The studies suggest that deaf ASL–English bilinguals process English letters and ASL fingerspelling differently and that the two systems may have distinct neural representations. However, the fact that fingerspelling fonts can prime English letters suggests that the two orthographies may share abstract representations to some extent.
  2. Letter position coding in word recognition has been widely investigated in the visual modality (e.g., labotarory is confusable with laboratory), but much less in the tactile modality using braille, leading to an incomplete understanding of whether this process is modality-dependent. Unlike sighted readers, braille readers do not show a transposed-letter similarity effect with nonadjacent transpositions (e.g., labotarory = labodanory; Perea et al., 2012). While this latter finding was taken to suggest that the flexibility in letter position coding is due to visual factors (e.g., perceptual uncertainty about the location of visual objects, i.e., letters), it is necessary to test whether transposed-letter effects occur with adjacent letters before reaching firm conclusions. Indeed, in the auditory modality (i.e., another serial modality), a transposed-phoneme effect occurs for adjacent but not for nonadjacent transpositions. In a lexical decision task, we examined whether pseudowords created by transposing two adjacent letters of a word (e.g., laboartory) are more confusable with their base word (laboratory) than pseudowords created by replacing those letters (laboestory) in braille. Results showed that transposed-letter pseudowords produced more errors and slower responses than the orthographic controls. Thus, these findings suggest that the mechanism of serial order, while universal, can be shaped by the sensory modality at play.
  3. Knowing the perceived economic value of words is often desirable for applications such as product naming and pricing. However, there is little understanding of the underlying economic worth of words, even though we have seen breakthroughs in learning the semantics of words. In this work, we bridge this gap by proposing a joint-task neural network model, the Word Worth Model (WWM), to learn word embeddings that capture underlying economic worth. Through the design of WWM, we incorporate contextual factors, e.g., a product's brand name and a restaurant's city, that may affect the aggregated monetary value of a textual item. Via a comprehensive evaluation, we show that, compared with other baselines, WWM accurately predicts missing words when given target words. We also show, through various visualization analyses, that the learned embeddings of both words and contextual factors reflect the underlying economic worth well.
  4. Bilinguals occasionally produce language intrusion errors (inadvertent translations of the intended word), especially when attempting to produce function word targets, and often when reading aloud mixed-language paragraphs. We investigated whether these errors are due to a failure of attention during speech planning or to a failure to monitor speech output, by classifying errors based on whether and when they were corrected and by examining the eye movement behaviour surrounding them. Prior research on this topic has primarily tested alphabetic languages (e.g., Spanish–English bilinguals) in which part of speech is confounded with word length, which is in turn related to word skipping (i.e., decreased attention). We therefore tested 29 Chinese–English bilinguals whose languages differ in orthography, visually cueing language membership, and for whom part of speech (in Chinese) is less confounded with word length. Despite the strong orthographic cue, Chinese–English bilinguals produced intrusion errors with effects similar to those previously reported (e.g., especially with function word targets written in the dominant language). Gaze durations differed depending on whether errors were made and corrected or not, but these patterns were similar for function and content words and therefore cannot explain part-of-speech effects. However, bilinguals regressed to words produced as errors more often than to correctly produced words, although these regressions facilitated correction of errors only for content words, not for function words. These data suggest that the vulnerability of function words to language intrusion errors primarily reflects automatic retrieval and a failure of speech monitoring mechanisms to stop function (versus content) word errors after they have been planned for production.

     
  5. Vowel harmony is a phenomenon in which the vowels in a word share some features (e.g., frontness vs. backness). It occurs in several families of languages (e.g., Turkic and Finno-Ugric languages) and serves as an effective segmentation cue in continuous speech and when reading compound words. The present study examined whether vowel harmony also plays a role in visual word recognition. We chose Turkish, a language with four front and four back vowels in which approximately 75% of words are harmonious. If vowel harmony contributes to the formation of coherent phonological codes during lexical access, harmonious words should reach a stable orthographic-phonological state more rapidly than disharmonious words. To test this hypothesis, in Experiment 1 we selected two types of monomorphemic Turkish words: harmonious (containing only front vowels or only back vowels) and disharmonious (containing both front and back vowels); a parallel manipulation was applied to the pseudowords. Results showed faster lexical decisions for harmonious than for disharmonious words, whereas vowel harmony did not affect pseudowords. In Experiment 2, where all words were harmonious, we found a small but reliable advantage for disharmonious over harmonious pseudowords. These findings suggest that vowel harmony helps the formation of stable phonological codes in Turkish words, but it does not play a key role in pseudoword rejection.

     