Purpose: The “bubble noise” technique has recently been introduced as a method for identifying the regions of time–frequency maps (i.e., spectrograms) of speech that are especially important to listeners during speech recognition. Because the technique identifies regions of “importance” that are specific to the speech stimulus and the listener, these regions can be compared across different listener groups. In cross-linguistic and second-language (L2) speech perception, for example, the method identifies differences in the regions of importance used in deciding phoneme category membership. This research note describes the application of bubble noise to the study of language learning for 3 different language pairs: Hindi–English bilinguals' perception of the /v/–/w/ contrast in American English, native English speakers' perception of the tense/lax contrast for Korean fricatives and affricates, and native English speakers' perception of Mandarin lexical tone. Conclusion: We demonstrate that this technique provides insight into which information in the speech signal is important for native/first-language listeners compared with nonnative/L2 listeners. Furthermore, the method can be used to examine whether L2 speech perception training is effective in bringing the listener's attention to the important cues.
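The analysis logic behind the technique can be summarized in a short sketch. The Python code below is illustrative only: the grid resolution, bubble counts, Gaussian bubble shapes, and simulated responses standing in for listener data are all assumptions, not the authors' implementation. Random time–frequency "bubbles" let glimpses of the speech through masking noise, and correlating bubble placement with trial correctness estimates which spectrogram regions support recognition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_time, n_freq = 100, 64          # assumed spectrogram grid resolution

def bubble_mask(n_bubbles=15, sigma_t=5.0, sigma_f=3.0):
    """Noise-suppression mask: ~1 inside Gaussian 'bubbles', ~0 elsewhere."""
    t = np.arange(n_time)[:, None]
    f = np.arange(n_freq)[None, :]
    mask = np.zeros((n_time, n_freq))
    for _ in range(n_bubbles):
        ct, cf = rng.uniform(0, n_time), rng.uniform(0, n_freq)
        mask += np.exp(-((t - ct) ** 2 / (2 * sigma_t ** 2)
                         + (f - cf) ** 2 / (2 * sigma_f ** 2)))
    return np.clip(mask, 0.0, 1.0)

# Simulate trials: one mask per trial plus a correct/incorrect response.
n_trials = 500
masks = np.stack([bubble_mask() for _ in range(n_trials)])
correct = rng.integers(0, 2, n_trials)   # stand-in for listener responses

# Per-cell point-biserial correlation between audibility and correctness:
# high values mark time-frequency regions important for recognition.
mc = masks - masks.mean(axis=0)
cc = correct - correct.mean()
importance = (mc * cc[:, None, None]).mean(axis=0) / (
    masks.std(axis=0) * correct.std() + 1e-12)
print(importance.shape)   # (100, 64) map of estimated importance
```

In the actual paradigm, `correct` would hold listeners' trial-by-trial identification responses and `masks` the noise-suppression patterns actually presented.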
Beyond the Iambic-Trochaic Law: the joint influence of duration and intensity on the perception of rhythmic speech
The Iambic-Trochaic Law (ITL) asserts that listeners associate greater acoustic intensity with group beginnings and greater duration with group endings. Some researchers have assumed a natural connection between these perceptual tendencies and universal principles underlying linguistic categories of rhythm. The experimental literature on ITL effects is limited in three ways: few studies of listeners' perceptions of alternating sound sequences have used speech-like stimuli; cross-linguistic testing has been inadequate; and existing studies have manipulated intensity and duration singly, whereas these features vary together in natural speech. This paper reports the results of three experiments conducted with native Zapotec speakers and one with native English speakers. We tested listeners' grouping biases using streams of alternating syllables in which intensity and duration were varied separately, and sequences in which they were covaried. The findings suggest that care should be taken in assuming a natural connection between the ITL and universal principles of prosodic organisation.
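To make the stimulus manipulation concrete, here is a minimal Python sketch of an alternating stream in which intensity and duration are varied singly or covaried. The sine-tone "syllables", durations, amplitudes, and inter-stimulus interval are assumptions for illustration, not the stimuli used in the experiments.

```python
import numpy as np

SR = 16000  # sample rate (Hz), assumed

def tone(dur_s, amp, f0=220.0):
    """A simple amplitude-scaled tone standing in for a syllable."""
    t = np.arange(int(SR * dur_s)) / SR
    y = amp * np.sin(2 * np.pi * f0 * t)
    # short linear ramps to avoid onset/offset clicks
    n_ramp = int(0.01 * SR)
    env = np.ones_like(y)
    env[:n_ramp] = np.linspace(0, 1, n_ramp)
    env[-n_ramp:] = np.linspace(1, 0, n_ramp)
    return y * env

def alternating_stream(n_pairs=8, dur=(0.2, 0.2), amp=(0.5, 0.5), isi=0.05):
    """Alternate two element types; set dur or amp unequal to cue grouping."""
    gap = np.zeros(int(SR * isi))
    elems = []
    for _ in range(n_pairs):
        elems += [tone(dur[0], amp[0]), gap, tone(dur[1], amp[1]), gap]
    return np.concatenate(elems)

# Intensity cue alone (ITL predicts loud-first, trochaic grouping):
intensity_stream = alternating_stream(amp=(0.8, 0.4))
# Duration cue alone (ITL predicts long-last, iambic grouping):
duration_stream = alternating_stream(dur=(0.3, 0.15))
# Covaried cues, as in natural speech:
covaried_stream = alternating_stream(amp=(0.8, 0.4), dur=(0.3, 0.15))
```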
- Award ID(s): 1147959
- PAR ID: 10160666
- Date Published:
- Journal Name: Phonology
- Volume: 31
- Issue: 1
- ISSN: 0952-6757
- Page Range / eLocation ID: 51 to 94
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Period-doubled voice consists of two alternating periods with multiple frequencies and is often perceived as rough with an indeterminate pitch. Past pitch-matching studies of period-doubled voice found that the perceived pitch became lower as the degree of amplitude and frequency modulation between the two alternating periods increased. The perceptual outcome also differed across f0s and modulation types: a lower f0 prompted earlier identification of a lower pitch, and the matched pitch dropped more quickly in frequency- than amplitude-modulated tokens (Sun & Xu, 2002; Bergan & Titze, 2001). However, it is unclear how listeners perceive period doubling when identifying linguistic tones. In an artificial language learning paradigm, this study used resynthesized stimuli with alternating amplitudes and/or frequencies of varying degrees, based on a production study of period-doubled voice (Huang, 2022). Listeners were native speakers of English and Mandarin. We confirm the positive relationship between the modulation degree and the proportion of low tones heard, and find that frequency modulation biased listeners to choose more low-tone options than amplitude modulation. However, a higher f0 (300 Hz) leads to a low-tone percept in more amplitude-modulated tokens than a lower f0 (200 Hz). Both English and Mandarin listeners behaved similarly, suggesting that pitch perception during period doubling is not language-specific. Furthermore, period doubling is predicted to signal low tones in languages, even when the f0 is high.
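As a rough illustration of how period-doubled stimuli with controlled modulation depths might be resynthesized, the sketch below alternates the amplitude and the period length of successive pulse-like cycles. The decaying-sinusoid source and the parameter values are assumptions for illustration, not the resynthesis procedure of Huang (2022) or of this study.

```python
import numpy as np

SR = 16000  # sample rate (Hz), assumed

def period_doubled(f0=200.0, dur_s=0.5, amp_mod=0.0, freq_mod=0.0):
    """Pulse-train-like signal whose odd/even periods alternate in
    amplitude (amp_mod, 0-1) and in period length (freq_mod, 0-1)."""
    base_period = SR / f0
    samples, even = [], True
    while len(samples) < int(SR * dur_s):
        # even periods lengthened, odd shortened -> frequency modulation
        period = base_period * (1 + (freq_mod / 2 if even else -freq_mod / 2))
        # even periods attenuated -> amplitude modulation
        amp = 1.0 - (amp_mod if even else 0.0)
        n = int(round(period))
        t = np.arange(n) / n
        # one decaying-sinusoid "pulse" per period as a crude source model
        samples.extend(amp * np.sin(2 * np.pi * 3 * t) * np.exp(-5 * t))
        even = not even
    return np.asarray(samples[: int(SR * dur_s)])

mild = period_doubled(amp_mod=0.2)        # weak amplitude modulation
strong = period_doubled(freq_mod=0.3)     # stronger frequency modulation
combined = period_doubled(amp_mod=0.2, freq_mod=0.3, f0=300.0)
```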
Human listeners are better at telling apart speakers of their native language than speakers of other languages, a phenomenon known as the language familiarity effect. The recent observation of such an effect in infants as young as 4.5 months of age (Fecher & Johnson, in press) has led to new difficulties for theories of the effect. On the one hand, retaining classical accounts, which rely on sophisticated knowledge of the native language (Goggin, Thompson, Strube, & Simental, 1991), requires an explanation of how infants could acquire this knowledge so early. On the other hand, letting go of these accounts requires an explanation of how the effect could arise in the absence of such knowledge. In this paper, we build on algorithms from unsupervised machine learning and zero-resource speech technology to propose, for the first time, a feasible acquisition mechanism for the language familiarity effect in infants. Our results show how, without relying on sophisticated linguistic knowledge, infants could develop a language familiarity effect through statistical modeling, at multiple time-scales, of the acoustics of the speech signal to which they are exposed.
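A minimal sketch of this kind of mechanism, assuming MFCC-like frame features and a Gaussian mixture as the unsupervised density model (the paper's actual models and time-scales may differ): frames from talkers of the modeled language score higher under the fitted model, which can ground a familiarity signal without any phoneme- or word-level knowledge.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-ins for acoustic feature frames of shape (n_frames, n_dims);
# real input would be e.g. MFCCs extracted from exposure audio.
exposure = rng.normal(size=(5000, 13))                 # native-language frames
familiar = exposure[:500] + rng.normal(scale=0.1, size=(500, 13))
unfamiliar = rng.normal(loc=2.0, size=(500, 13))       # other-language frames

# Unsupervised density model of the exposure language's acoustics.
gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(exposure)

# Higher mean log-likelihood = better modeled = "more familiar" voice.
print(gmm.score(familiar))     # relatively high
print(gmm.score(unfamiliar))   # relatively low
```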
Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners’ language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a “release from masking” from lower-level acoustic differences between languages and talkers, or higher-level cognitive and linguistic factors? To address this question, we created a “one-man bilingual cocktail party” selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin–English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin–English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the “cocktail party” paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerge only in the most demanding target-to-masker contexts. The “one-man bilingual cocktail party” establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin–English bilinguals.
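The target-to-masker ratio (TMR) manipulation is straightforward to express in code. A hedged sketch follows, with noise standing in for the target and distractor recordings; `mix_at_tmr` is a hypothetical helper for illustration, not part of the study's materials.

```python
import numpy as np

def mix_at_tmr(target, masker, tmr_db):
    """Scale the masker so the target-to-masker ratio (RMS, in dB) equals
    tmr_db, then sum. Assumes equal-length mono signals."""
    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-12)
    gain = rms(target) / (rms(masker) * 10 ** (tmr_db / 20))
    return target + gain * masker

rng = np.random.default_rng(0)
target = rng.normal(size=16000)   # stand-in for English target speech
masker = rng.normal(size=16000)   # stand-in for Mandarin/English distractor
hard = mix_at_tmr(target, masker, tmr_db=-6.0)   # masker 6 dB above target
easy = mix_at_tmr(target, masker, tmr_db=+6.0)   # target 6 dB above masker
```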
The way listeners perceive speech sounds is largely determined by the language(s) they were exposed to as a child. For example, native speakers of Japanese have a hard time discriminating between American English /ɹ/ and /l/, a phonetic contrast that has no equivalent in Japanese. Such effects are typically attributed to knowledge of sounds in the native language, but quantitative models of how these effects arise from linguistic knowledge are lacking. One possible source for such models is Automatic Speech Recognition (ASR) technology. We implement models based on two types of systems from the ASR literature—hidden Markov models (HMMs) and the more recent, and more accurate, neural network systems—and ask whether, in addition to showing better performance, the neural network systems also provide better models of human perception. We find that while both types of systems can account for Japanese natives’ difficulty with American English /ɹ/ and /l/, only the neural network system successfully accounts for Japanese natives’ facility with Japanese vowel length contrasts. Our work provides a new example, in the domain of speech perception, of an often observed correlation between task performance and similarity to human behavior.
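One common way to ask whether a model "perceives" a contrast is an ABX discrimination test over its internal representations: X should lie closer to a same-category token A than to a different-category token B. The sketch below, assuming each stimulus is a sequence of frame vectors and using DTW-aggregated cosine distances, is one standard realization of this idea, not necessarily the paper's exact evaluation.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping over per-frame cosine distances, a simple way
    to compare variable-length model representations of two stimuli."""
    def frame_dist(x, y):
        return 1 - x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12)
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = frame_dist(a[i - 1], b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)   # path-length-normalized distance

def abx_correct(A, B, X):
    """ABX trial: X belongs to A's category; the model 'discriminates'
    correctly when X is closer to A than to B."""
    return dtw_distance(A, X) < dtw_distance(B, X)

# Toy usage with random frame sequences standing in for representations:
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 16))
B = rng.normal(size=(25, 16))
X = A + rng.normal(scale=0.1, size=A.shape)   # same category as A
print(abx_correct(A, B, X))                   # expected: True
```

Averaging `abx_correct` over many (A, B, X) triplets of, say, /ɹ/ and /l/ tokens yields a discrimination score that can be compared with listeners' accuracy.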