Although the application of deep learning to automatic speech recognition (ASR) has resulted in dramatic reductions in word error rate for languages with abundant training data, ASR for languages with few resources has yet to benefit from deep learning to the same extent. In this paper, we investigate various methods of acoustic modeling and data augmentation with the goal of improving the accuracy of a deep learning ASR framework for a low-resource language with a high baseline word error rate. We compare several methods of generating synthetic acoustic training data via voice transformation and signal distortion, and we explore several strategies for integrating this data into the acoustic training pipeline. We evaluate our methods on an indigenous language of North America with minimal training resources. We show that training initially via transfer learning from an existing high-resource language acoustic model, refining weights using a heavily concentrated synthetic dataset, and finally fine-tuning to the target language using limited synthetic data reduces WER by 15% over just transfer learning using deep recurrent methods. Further, we show improvements over traditional frameworks by 19% using a similar multistage training with deep convolutional approaches.
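The multistage strategy in the abstract (transfer from a high-resource model, refinement on a synthetic-heavy mix, then fine-tuning on limited target-language data) can be sketched as a staged training schedule. This is a minimal illustrative sketch only: the stage names, data-mix ratios, and learning rates below are assumptions for illustration, not the paper's actual settings.

```python
# Hypothetical sketch of a three-stage low-resource ASR training schedule:
# transfer initialization, synthetic-heavy refinement, target fine-tuning.
# All names and hyperparameters here are illustrative assumptions.

def build_schedule(real_hours, synthetic_hours):
    """Return an ordered list of training-stage configurations."""
    return [
        {"stage": "transfer_init",
         "data": "high_resource_corpus",       # start from a pretrained model
         "lr": 1e-3},
        {"stage": "synthetic_refine",          # synthetic-heavy refinement
         "data_mix": {"synthetic": synthetic_hours, "real": real_hours},
         "lr": 1e-4},
        {"stage": "target_finetune",           # limited synthetic + real data
         "data_mix": {"synthetic": min(synthetic_hours, real_hours),
                      "real": real_hours},
         "lr": 1e-5},
    ]

schedule = build_schedule(real_hours=10, synthetic_hours=50)
for stage in schedule:
    print(stage["stage"], stage["lr"])
```

The decreasing learning rate across stages reflects the usual practice of making smaller weight updates as training moves closer to the scarce target-language data.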
Adaptive Plasticity Under Adverse Listening Conditions is Disrupted in Developmental Dyslexia
Abstract. Objective: Acoustic distortions to the speech signal impair spoken language recognition, but healthy listeners exhibit adaptive plasticity consistent with rapid adjustments in how the distorted speech input maps to speech representations, perhaps through engagement of supervised error-driven learning. This puts adaptive plasticity in speech perception in an interesting position with regard to developmental dyslexia, inasmuch as dyslexia impacts speech processing and may involve dysfunction in the neurobiological systems hypothesized to support adaptive plasticity. Method: Here, we examined typical young adult listeners (N = 17) and those with dyslexia (N = 16) as they reported the identity of native-language monosyllabic spoken words to which signal processing had been applied to create a systematic acoustic distortion. During training, all participants experienced incremental signal-distortion increases to mildly distorted speech, along with orthographic and auditory feedback indicating word identity following each response, across a brief 250-trial training block. During the pretest and posttest phases, no feedback was provided. Results: Word recognition of severely distorted speech was poor at pretest and equivalent across groups. Training led to improved word recognition for the most severely distorted speech at posttest, with evidence that adaptive plasticity generalized to support recognition of new tokens not previously experienced under distortion. However, training-related recognition gains for listeners with dyslexia were significantly less robust than for control listeners. Conclusions: Less efficient adaptive plasticity to speech distortions may impact the ability of individuals with dyslexia to deal with variability arising from sources such as acoustic noise and foreign-accented speech.
- Award ID(s): 1655126
- PAR ID: 10319177
- Date Published:
- Journal Name: Journal of the International Neuropsychological Society
- Volume: 27
- Issue: 1
- ISSN: 1355-6177
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract. Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners’ language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a “release from masking” from lower-level acoustic differences between languages and talkers, or higher-level cognitive and linguistic factors? To address this question, we created a “one-man bilingual cocktail party” selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin–English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin–English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the “cocktail party” paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerge only in the most demanding target-to-masker contexts. The “one-man bilingual cocktail party” establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin–English bilinguals.
This study investigates whether short-term perceptual training can enhance Seoul-Korean listeners’ use of English lexical stress in spoken word recognition. Unlike English, Seoul Korean does not have lexical stress (or lexical pitch accents/tones). Seoul-Korean speakers at a high-intermediate English proficiency completed a visual-world eye-tracking experiment adapted from Connell et al. (2018) (pre-/post-test). The experiment tested whether pitch in the target stimulus (accented versus unaccented first syllable) and vowel quality in the lexical competitor (reduced versus full first vowel) modulated fixations to the target word (e.g., PARrot; ARson) over the competitor word (e.g., paRADE or PARish; arCHIVE or ARcade). In the training (eight 30-min sessions over eight days), participants heard English lexical-stress minimal pairs uttered by four talkers (high variability) or one talker (low variability), categorized them as noun (first-syllable stress) or verb (second-syllable stress), and received accuracy feedback. The results showed that neither training increased target-over-competitor fixation proportions. Crucially, the same training had been found to improve Seoul-Korean listeners’ recall of English words differing in lexical stress (Tremblay et al., 2022) and their weighting of acoustic cues to English lexical stress (Tremblay et al., 2023). These results suggest that short-term perceptual training has a limited effect on target-over-competitor word activation.
Speech recognition by both humans and machines frequently fails in non-optimal yet common situations. For example, word recognition error rates for second-language (L2) speech can be high, especially under conditions involving background noise. At the same time, both human and machine speech recognition sometimes shows remarkable robustness against signal- and noise-related degradation. Which acoustic features of speech explain this substantial variation in intelligibility? Current approaches align speech to text to extract a small set of pre-defined spectro-temporal properties from specific sounds in particular words. However, variation in these properties leaves much cross-talker variation in intelligibility unexplained. We examine an alternative approach utilizing a perceptual similarity space acquired using self-supervised learning. This approach encodes distinctions between speech samples without requiring pre-defined acoustic features or speech-to-text alignment. We show that L2 English speech samples are less tightly clustered in the space than L1 samples, reflecting variability in English proficiency among L2 talkers. Critically, distances in this similarity space are perceptually meaningful: L1 English listeners have lower recognition accuracy for L2 speakers whose speech is more distant in the space from L1 speech. These results indicate that perceptual similarity may form the basis for an entirely new speech and language analysis approach.
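The clustering claim above (L2 samples spread more widely in the similarity space than L1 samples) can be made concrete with a simple tightness measure: mean distance of each embedding to its group centroid. This is an illustrative sketch only; the 2-D points stand in for learned self-supervised representations and are not data from the study.

```python
import math

# Illustrative sketch (not the study's code): quantify how tightly a set of
# speech embeddings clusters via mean Euclidean distance to the centroid.
# The sample points below are invented stand-ins for learned embeddings.

def centroid(points):
    dim = len(points[0])
    return [sum(p[i] for p in points) / len(points) for i in range(dim)]

def mean_dist_to_centroid(points):
    c = centroid(points)
    return sum(math.dist(p, c) for p in points) / len(points)

l1_samples = [(0.0, 0.1), (0.1, 0.0), (0.05, 0.05)]   # tight cluster
l2_samples = [(0.0, 1.0), (1.0, 0.0), (0.9, 0.9)]     # more spread out

print(mean_dist_to_centroid(l1_samples) < mean_dist_to_centroid(l2_samples))
# → True: the L2 stand-ins are less tightly clustered
```

A lower mean distance to the centroid corresponds to the tighter clustering the abstract reports for L1 speech.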
Documenting endangered languages supports the historical preservation of diverse cultures. Automatic speech recognition (ASR), while potentially very useful for this task, has been underutilized for language documentation due to the challenges inherent in building robust models from extremely limited audio and text training resources. In this paper, we explore the utility of supplementing existing training resources using synthetic data, with a focus on Seneca, a morphologically complex endangered language of North America. We use transfer learning to train acoustic models using both the small amount of available acoustic training data and artificially distorted copies of that data. We then supplement the language model training data with verb forms generated by rule and sentences produced by an LSTM trained on the available text data. The addition of synthetic data yields reductions in word error rate, demonstrating the promise of data augmentation for this task.
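The "artificially distorted copies" idea above amounts to cheap signal-level augmentation. The sketch below, a hedged illustration rather than the paper's pipeline, shows two common distortions on a toy waveform: additive Gaussian noise and simple speed perturbation by index resampling. Real systems apply these to audio tensors; plain Python lists are used here only to make the idea concrete.

```python
import random

# Hedged sketch of signal-distortion augmentation for scarce audio data.
# Function names and parameters are illustrative assumptions.

def add_noise(signal, noise_scale=0.05, seed=0):
    """Return a copy of the signal with small Gaussian noise added."""
    rng = random.Random(seed)
    return [s + rng.gauss(0.0, noise_scale) for s in signal]

def speed_perturb(signal, factor=1.1):
    """Resample by nearest-index lookup (factor > 1 shortens the signal)."""
    out_len = int(len(signal) / factor)
    return [signal[min(int(i * factor), len(signal) - 1)]
            for i in range(out_len)]

clean = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
augmented = [add_noise(clean),
             speed_perturb(clean, 1.25),
             speed_perturb(clean, 0.9)]
print([len(a) for a in augmented])  # → [8, 6, 8]
```

Each distorted copy is paired with the original transcript, multiplying the effective training set without new recordings, which is what makes this attractive for extremely low-resource languages like Seneca.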