Background/Objectives: The Implicit Prosody Hypothesis (IPH) posits that individuals generate internal prosodic representations during silent reading, mirroring those produced in spoken language. While converging behavioral evidence supports the IPH, the underlying neurocognitive mechanisms remain largely unknown. Therefore, this study investigated the neurophysiological markers of sensitivity to speech rhythm cues during silent word reading. Methods: EEGs were recorded while participants silently read four-word sequences, each composed of either trochaic words (stressed on the first syllable) or iambic words (stressed on the second syllable). Each sequence was followed by a target word that was either metrically congruent or incongruent with the preceding rhythmic pattern. To investigate the effects of metrical expectancy and lexical stress type, we examined single-trial event-related potentials (ERPs) and time–frequency representations (TFRs) time-locked to target words. Results: The results showed significant differences based on the stress pattern expectancy and type. Specifically, words that carried unexpected stress elicited larger ERP negativities between 240 and 628 ms after the word onset. Furthermore, different frequency bands were sensitive to distinct aspects of the rhythmic structure in language. Alpha activity tracked the rhythmic expectations, and theta and beta activities were sensitive to both the expected rhythms and specific locations of the stressed syllables. Conclusions: The findings clarify neurocognitive mechanisms of phonological and lexical mental representations during silent reading using a conservative data-driven approach. Similarity with neural response patterns previously reported for spoken language contexts suggests shared neural networks for implicit and explicit speech rhythm processing, further supporting the IPH and emphasizing the centrality of prosody in reading. 
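As a rough illustration of this kind of analysis pipeline, the sketch below shows how target-locked ERPs and Morlet-wavelet TFRs might be computed with MNE-Python. It is a minimal sketch, not the authors' code: the file name, trigger codes, epoch window, and frequency range are all assumptions.

```python
import numpy as np
import mne

# Hypothetical recording and trigger codes for the two target conditions.
raw = mne.io.read_raw_fif("silent_reading_raw.fif", preload=True)
events = mne.find_events(raw)
event_id = {"congruent": 1, "incongruent": 2}

# Epoch around target-word onset, baseline-corrected on the pre-stimulus interval.
epochs = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)

# Condition ERPs; the reported negativity for unexpected stress fell
# between 240 and 628 ms after word onset.
erp_congruent = epochs["congruent"].average()
erp_incongruent = epochs["incongruent"].average()

# Time-frequency representations spanning the theta, alpha, and beta bands.
freqs = np.arange(4.0, 31.0, 1.0)  # 4-30 Hz
tfr_incongruent = mne.time_frequency.tfr_morlet(
    epochs["incongruent"], freqs=freqs, n_cycles=freqs / 2.0, return_itc=False)
```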
Good-Enough Production: Selecting Easier Words Instead of More Accurate Ones
            Dominant theories of language production suggest that word choice—lexical selection—is driven by alignment with the intended message: To talk about a young feline, we choose the most aligned word, kitten. Another factor that could shape lexical selection is word accessibility, or how easy it is to produce a given word (e.g., cat is more accessible than kitten). To test whether producers are also influenced by word accessibility, we designed an artificial lexicon containing high- and low-frequency words whose meanings correspond to compass directions. Participants in a communication game (total N = 181 adults) earned points by producing compass directions, which often required an implicit decision between a high- and low-frequency word. A trade-off was observed across four experiments; specifically, high-frequency words were produced even when less aligned with messages. These results suggest that implicit decisions between words are impacted by accessibility. Of all the times that people have produced cat, sometimes they likely meant kitten. 
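One way to picture this trade-off is as a selection rule that scores candidate words on message alignment plus an accessibility bonus. The toy sketch below is an illustration under assumed values (lexicon, frequencies, weighting), not the authors' model or stimuli.

```python
import math

def choose_word(message_angle, lexicon, beta):
    """Pick the word maximizing alignment with the message (closer compass
    angle is better) plus a frequency-based accessibility bonus."""
    def score(word):
        alignment = -abs(message_angle - word["angle"])
        accessibility = beta * math.log(word["frequency"])
        return alignment + accessibility
    return max(lexicon, key=score)

# Hypothetical lexicon: a frequent coarse term vs. a rare precise one.
lexicon = [
    {"form": "north",      "angle": 0,  "frequency": 1000},
    {"form": "north-east", "angle": 45, "frequency": 10},
]

# For a message at 40 degrees, a purely alignment-driven producer says
# "north-east"; with enough weight on accessibility, "north" wins instead.
print(choose_word(40, lexicon, beta=0.5)["form"])   # north-east
print(choose_word(40, lexicon, beta=12.0)["form"])  # north
```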
- Award ID(s): 1849236
- PAR ID: 10371588
- Publisher / Repository: SAGE Publications
- Date Published:
- Journal Name: Psychological Science
- Volume: 33
- Issue: 9
- ISSN: 0956-7976
- Format(s): Medium: X
- Size(s): p. 1440-1451
- Sponsoring Org: National Science Foundation
More Like this
- 
Lexical tones are widely believed to be a formidable learning challenge for adult speakers of nontonal languages. While difficulties—as well as rapid improvements—are well documented for beginning second language (L2) learners, research with more advanced learners is needed to understand how tone perception difficulties impact word recognition once learners have a substantial vocabulary. The present study narrows in on difficulties suggested in previous work, which found a dissociation in advanced L2 learners between highly accurate tone identification and largely inaccurate lexical decision for tone words. We investigate a "best-case scenario" for advanced L2 tone word processing by testing performance in nearly ideal listening conditions—with words spoken clearly and in isolation. Under such conditions, do learners still have difficulty in lexical decision for tone words? If so, is the difficulty driven by the quality of lexical representations or by L2 processing routines? Advanced L2 and native Chinese listeners made lexical decisions while an electroencephalogram was recorded. Nonwords had a first syllable whose vowel or tone differed from that of a common disyllabic word. As a group, L2 learners performed less accurately when tones were manipulated than when vowels were manipulated. Subsequent analyses showed that this was the case even for the subset of items on which learners showed correct and confident tone identification in an offline written vocabulary test. Event-related potential results indicated N400 effects for both nonword conditions in L1, but only vowel N400 effects in L2, with tone responses intermediate between those to real words and those to vowel nonwords. These results are evidence of the persistent difficulty most L2 learners have in using tones for online word recognition, and they indicate that it is driven by a confluence of factors related to both L2 lexical representations and processing routines. We suggest that this tone-nonword difficulty has real-world implications for learners: it may leave many toneless word representations in their mental lexicons and is likely to affect the efficiency with which they can learn new tone words.
- 
This study investigates whether short-term perceptual training can enhance Seoul-Korean listeners' use of English lexical stress in spoken word recognition. Unlike English, Seoul Korean does not have lexical stress (or lexical pitch accents/tones). Seoul-Korean speakers at a high-intermediate English proficiency completed a visual-world eye-tracking experiment adapted from Connell et al. (2018) (pre-/post-test). The experiment tested whether pitch in the target stimulus (accented versus unaccented first syllable) and vowel quality in the lexical competitor (reduced versus full first vowel) modulated fixations to the target word (e.g., PARrot; ARson) over the competitor word (e.g., paRADE or PARish; arCHIVE or ARcade). In the training (eight 30-min sessions over eight days), participants heard English lexical-stress minimal pairs uttered by four talkers (high variability) or one talker (low variability), categorized them as nouns (first-syllable stress) or verbs (second-syllable stress), and received accuracy feedback. The results showed that neither training increased target-over-competitor fixation proportions. Crucially, the same training had previously been found to improve Seoul-Korean listeners' recall of English words differing in lexical stress (Tremblay et al., 2022) and their weighting of acoustic cues to English lexical stress (Tremblay et al., 2023). These results suggest that short-term perceptual training has a limited effect on target-over-competitor word activation.
- 
The Time-Invariant String Kernel (TISK) model of spoken word recognition (Hannagan et al., 2013) is an interactive activation model like TRACE (McClelland & Elman, 1986). However, it uses orders of magnitude fewer nodes and connections because it replaces TRACE's time-specific duplicates of phoneme and word nodes with time-invariant nodes based on a string kernel representation (essentially a phoneme-by-phoneme matrix in which a word is encoded by all the ordered open diphones it contains; e.g., cat has /kæ/, /æt/, and /kt/; a small sketch of this encoding appears after this list). Hannagan et al. (2013) showed that TISK behaves similarly to TRACE in the time course of phonological competition and even in word-specific recognition times. However, the original implementation did not include feedback from words to diphone nodes, precluding simulation of top-down effects. Here, we demonstrate that TISK can easily be adapted to include lexical feedback, affording simulation of top-down effects as well as allowing the model to demonstrate graceful degradation given noisy inputs.
- 
We examined how phonological competition effects in spoken word recognition change with word length. Cohort effects (competition between words that overlap at onset) are strong and easily replicated. Rhyme effects (competition between words that mismatch at onset) are weaker, emerge later in the time course of spoken word recognition, and are more difficult to replicate. We conducted a simple experiment examining cohort and rhyme competition with monosyllabic versus bisyllabic words. Degree of competition was predicted by proportion of phonological overlap. Longer rhymes, with greater overlap in both number and proportion of shared phonemes, compete more strongly (e.g., kettle-medal [0.8 overlap] vs. cat-mat [0.67 overlap]). In contrast, long and short cohort pairs constrained to have a constant (2-phoneme) overlap vary in proportion of overlap: longer cohort pairs (e.g., camera-candle) have a lower proportion of overlap (here, 0.33) than shorter ones (e.g., cat-can, with 0.67 overlap) and compete more weakly. This finding has methodological implications (rhyme effects are less likely to be observed with shorter words, while cohort effects are diminished for longer words) but also theoretical ones: degree of competition is not a simple function of the number of overlapping phonemes; it is conditioned on proportion of overlap (worked out in the second sketch below). Simulations with TRACE help explicate how this result might emerge.
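As promised above, here is a small sketch of the open-diphone (string kernel) encoding that TISK uses in place of time-specific nodes. The ASCII phoneme symbols are stand-ins for the IPA transcriptions.

```python
from itertools import combinations

def open_diphones(phonemes):
    """All ordered open diphones of a word: every ordered pair of phonemes,
    whether adjacent or not, in the order they occur."""
    return {(a, b) for a, b in combinations(phonemes, 2)}

# cat = /k ae t/ -> {/k ae/, /ae t/, /k t/}, matching the example in the
# TISK abstract above.
print(open_diphones(["k", "ae", "t"]))  # {('k', 'ae'), ('ae', 't'), ('k', 't')}
```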
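And a worked version of the proportion-of-overlap arithmetic from the last abstract. The transcriptions are rough assumptions (with American flapping, kettle and medal match everywhere except the onset).

```python
def overlap_proportion(a, b):
    """Phonemes shared in matching positions, as a proportion of the
    longer word's length."""
    shared = sum(x == y for x, y in zip(a, b))
    return shared / max(len(a), len(b))

# Rhyme pairs mismatch only at onset, so longer rhymes overlap more:
kettle, medal = ["k", "E", "R", "@", "l"], ["m", "E", "R", "@", "l"]
cat, mat = ["k", "ae", "t"], ["m", "ae", "t"]
print(overlap_proportion(kettle, medal))  # 4/5 = 0.8
print(overlap_proportion(cat, mat))       # 2/3 ~ 0.67

# Cohort pairs share a constant 2-phoneme onset, so longer words overlap less:
camera, candle = ["k", "ae", "m", "@", "r", "@"], ["k", "ae", "n", "d", "@", "l"]
can = ["k", "ae", "n"]
print(overlap_proportion(camera, candle))  # 2/6 ~ 0.33
print(overlap_proportion(cat, can))        # 2/3 ~ 0.67
```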