Vowel harmony is a phenomenon in which the vowels in a word share some features (e.g., frontness vs. backness). It occurs in several families of languages (e.g., Turkic and Finno-Ugric languages) and serves as an effective segmenting cue in continuous speech and when reading compound words. The present study examined whether vowel harmony also plays a role in visual word recognition. We chose Turkish, a language with four front and four back vowels in which approximately 75% of words are harmonious. If vowel harmony contributes to the formation of coherent phonological codes during lexical access, harmonious words will reach a stable orthographic-phonological state more rapidly than disharmonious words. To test this hypothesis, in Experiment 1, we selected two types of monomorphemic Turkish words: harmonious (containing only front vowels or back vowels) and disharmonious (containing front and back vowels)—a parallel manipulation was applied to the pseudowords. Results showed faster lexical decisions for harmonious than disharmonious words, whereas vowel harmony did not affect pseudowords. In Experiment 2, where all words were harmonious, we found a small but reliable advantage for disharmonious over harmonious pseudowords. These findings suggest that vowel harmony helps the formation of stable phonological codes in Turkish words, but it does not play a key role in pseudoword rejection.
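The harmonious/disharmonious stimulus manipulation can be illustrated with a minimal sketch. This classifier is not from the paper; it simply encodes the standard Turkish vowel inventory (front: e, i, ö, ü; back: a, ı, o, u) and checks whether a word's vowels all share frontness or backness.

```python
# Sketch of the frontness/backness harmony criterion described above.
# Vowel sets follow standard Turkish orthography; the word examples are
# illustrative, not items from the study's stimulus lists.
FRONT = set("eiöü")
BACK = set("aıou")

def is_harmonious(word: str) -> bool:
    """True if all of the word's vowels are front, or all are back."""
    vowels = [c for c in word if c in FRONT or c in BACK]
    return all(c in FRONT for c in vowels) or all(c in BACK for c in vowels)

print(is_harmonious("kelebek"))  # e, e, e: all front -> True
print(is_harmonious("kitap"))    # i (front) + a (back) -> False
```

Note that the check operates on lowercase orthographic forms; mapping uppercase Turkish İ/I correctly would need locale-aware casing.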
- PAR ID: 10541823
- Publisher / Repository: SAGE Publications
- Date Published:
- Journal Name: Quarterly Journal of Experimental Psychology
- ISSN: 1747-0218
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Lexical tones are widely believed to be a formidable learning challenge for adult speakers of nontonal languages. While difficulties—as well as rapid improvements—are well documented for beginning second language (L2) learners, research with more advanced learners is needed to understand how tone perception difficulties impact word recognition once learners have a substantial vocabulary. The present study narrows in on difficulties suggested in previous work, which found a dissociation in advanced L2 learners between highly accurate tone identification and largely inaccurate lexical decision for tone words. We investigate a “best-case scenario” for advanced L2 tone word processing by testing performance in nearly ideal listening conditions—with words spoken clearly and in isolation. Under such conditions, do learners still have difficulty in lexical decision for tone words? If so, is it driven by the quality of lexical representations or by L2 processing routines? Advanced L2 and native Chinese listeners made lexical decisions while an electroencephalogram was recorded. Nonwords had a first syllable with either a vowel or tone that differed from that of a common disyllabic word. As a group, L2 learners performed less accurately when tones were manipulated than when vowels were manipulated. Subsequent analyses showed that this was the case even in the subset of items for which learners showed correct and confident tone identification in an offline written vocabulary test. Event-related potential results indicated N400 effects for both nonword conditions in L1, but only vowel N400 effects in L2, with tone responses intermediate between those of real words and vowel nonwords. These results are evidence of the persistent difficulty most L2 learners have in using tones for online word recognition, and indicate it is driven by a confluence of factors related to both L2 lexical representations and processing routines. We suggest that this tone nonword difficulty has real-world implications for learners: It may result in many toneless word representations in their mental lexicons, and is likely to affect the efficiency with which they can learn new tone words.
-
This study investigates whether short-term perceptual training can enhance Seoul-Korean listeners’ use of English lexical stress in spoken word recognition. Unlike English, Seoul Korean does not have lexical stress (or lexical pitch accents/tones). Seoul-Korean speakers at a high-intermediate English proficiency completed a visual-world eye-tracking experiment adapted from Connell et al. (2018) (pre-/post-test). The experiment tested whether pitch in the target stimulus (accented versus unaccented first syllable) and vowel quality in the lexical competitor (reduced versus full first vowel) modulated fixations to the target word (e.g., PARrot; ARson) over the competitor word (e.g., paRADE or PARish; arCHIVE or ARcade). In the training (eight 30-min sessions over eight days), participants heard English lexical-stress minimal pairs uttered by four talkers (high variability) or one talker (low variability), categorized them as noun (first-syllable stress) or verb (second-syllable stress), and received accuracy feedback. The results showed that neither training condition increased target-over-competitor fixation proportions. Crucially, the same training had been found to improve Seoul-Korean listeners’ recall of English words differing in lexical stress (Tremblay et al., 2022) and their weighting of acoustic cues to English lexical stress (Tremblay et al., 2023). These results suggest that short-term perceptual training has a limited effect on target-over-competitor word activation.
-
In this paper, we study speech development in children using longitudinal acoustic and articulatory data. Data were collected yearly from grade 1 to grade 4 from four female and four male children. We analyze acoustic and articulatory properties of four corner vowels: /æ/, /i/, /u/, and /ɑ/, each occurring in two different words (different surrounding contexts). Acoustic features include formant frequencies and subglottal resonances (SGRs). Articulatory features include tongue curvature degree (TCD) and tongue curvature position (TCP). Based on the analyses, we observe the emergence of sex-based differences starting from grade 2. Similar to adults, the SGRs divide the vowel space into high, low, front, and back regions at least as early as grade 2. On average, TCD is correlated with vowel height and TCP with vowel frontness. Children in our study used varied articulatory configurations to achieve similar acoustic targets.
-
Southern American English is spoken in a large geographic region in the United States. Its characteristics include back-vowel fronting (e.g., in goose, foot, and goat), which has been ongoing since the mid-nineteenth century; meanwhile, the low back vowels (in lot and thought) have recently merged in some areas. We investigate these five vowels in the Digital Archive of Southern Speech, a legacy corpus of linguistic interviews with sixty-four speakers born 1886-1956. We extracted 89,367 vowel tokens and used generalized additive mixed-effects models to test for socially-driven changes to both their relative phonetic placements and the shapes of their formant trajectories. Our results reinforce previous descriptions of Southern vowels while contributing additional phonetic detail about their trajectories. Goose-fronting is a change in progress, with greatest fronting after coronal consonants. Goat is quite dynamic; it lowers and fronts in apparent time. Generally, women have more fronted realizations than men. Foot is largely monophthongal, and stable across time. Lot and thought are distinct and unmerged, occupying different regions of the vowel space. While their relative positions change across generations, all five vowels show a remarkable consistency in formant trajectory shapes across time. This study’s results reveal social and phonetic details about the back vowels of Southerners born in the late nineteenth and early twentieth centuries: goose-fronting was well underway, goat-fronting was beginning, but foot remained backed, and the low back vowels were unmerged.
-
This study employed the N400 event-related potential (ERP) to investigate how observing different types of gestures at learning affects the subsequent processing of L2 Mandarin words differing in lexical tone by L1 English speakers. The effects of pitch gestures conveying lexical tones (e.g., upwards diagonal movements for rising tone), semantic gestures conveying word meanings (e.g., waving goodbye for “to wave”), and no gesture were compared. In a lexical tone discrimination task, larger N400s for Mandarin target words mismatching vs. matching Mandarin prime words in lexical tone were observed for words learned with pitch gesture. In a meaning discrimination task, larger N400s for English target words mismatching vs. matching Mandarin prime words in meaning were observed for words learned with pitch and semantic gesture. These findings provide the first neural evidence that observing gestures during L2 word learning enhances subsequent phonological and semantic processing of learned L2 words.