

Search: all records where Award ID contains 2219843

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites, whose policies may differ from this site's.

  1. When listeners encounter a difficult-to-understand talker in a difficult listening situation, their perceptual mechanisms can adapt, making that talker easier to understand in that situation. This study examined talker-specific perceptual adaptation experimentally by embedding speech from second-language (L2) English talkers in varying levels of noise and collecting transcriptions from first-language (L1) English listeners (ten talkers, 100 listeners per experiment). Experiments 1 and 2 demonstrated that prior experience with an L2 talker's speech, presented first without noise and then with gradually increasing levels of noise, facilitated recognition of that talker in loud noise (a sketch of this noise-mixing step appears after the list). Experiment 3 tested whether adaptation is driven by tuning in to the talker's voice and speech patterns, by examining recognition of speech in loud noise following experience with the talker in quiet. Finally, Experiment 4 tested whether adaptation is driven by tuning out the background noise, by measuring speech-in-loud-noise recognition after experience with the talker in consistently loud noise. The results showed that both tuning in to the talker and tuning out the noise contribute to talker-specific perceptual adaptation to L2 speech in noise.
  2. Free, publicly-accessible full text available June 1, 2026
  3. Free, publicly-accessible full text available February 10, 2026
  4. Speech recognition by both humans and machines frequently fails in non-optimal yet common situations. For example, word-recognition error rates for second-language (L2) speech can be high, especially under conditions involving background noise. At the same time, both human and machine speech recognition sometimes shows remarkable robustness against signal- and noise-related degradation. Which acoustic features of speech explain this substantial variation in intelligibility? Current approaches align speech to text to extract a small set of pre-defined spectro-temporal properties from specific sounds in particular words. However, variation in these properties leaves much cross-talker variation in intelligibility unexplained. We examine an alternative approach that uses a perceptual similarity space acquired through self-supervised learning, which encodes distinctions between speech samples without requiring pre-defined acoustic features or speech-to-text alignment (see the embedding-distance sketch after this list). We show that L2 English speech samples are less tightly clustered in the space than L1 samples, reflecting variability in English proficiency among L2 talkers. Critically, distances in this similarity space are perceptually meaningful: L1 English listeners have lower recognition accuracy for L2 talkers whose speech is more distant in the space from L1 speech. These results indicate that perceptual similarity may form the basis for an entirely new approach to speech and language analysis.
  5. Measuring how well human listeners recognize speech under varying environmental conditions (speech intelligibility) is a challenge for theoretical, technological, and clinical approaches to speech communication. The current gold standard, human transcription, is time- and resource-intensive. Recent advances in automatic speech recognition (ASR) raise the possibility of automating intelligibility measurement. This study tested four state-of-the-art ASR systems on second-language speech-in-noise and found that one, Whisper, performed at or above human-listener accuracy (a scoring sketch follows the list). However, the content of Whisper's responses diverged substantially from human responses, especially at lower signal-to-noise ratios, suggesting both opportunities and limitations for ASR-based speech intelligibility modeling.
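
For item 1, the core manipulation is mixing each talker's speech with noise at a chosen signal-to-noise ratio (SNR). The sketch below shows one standard way to do this with NumPy; the file names, SNR values, and noise recording are illustrative assumptions, not details from the study.

    import numpy as np
    import soundfile as sf

    def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
        """Mix mono speech with mono noise at the target signal-to-noise ratio."""
        # Loop the noise if it is shorter than the speech, then trim to length.
        reps = int(np.ceil(len(speech) / len(noise)))
        noise = np.tile(noise, reps)[: len(speech)]
        speech_power = np.mean(speech ** 2)
        noise_power = np.mean(noise ** 2)
        # From snr_db = 10 * log10(speech_power / scaled_noise_power):
        target_noise_power = speech_power / (10 ** (snr_db / 10))
        return speech + noise * np.sqrt(target_noise_power / noise_power)

    # Hypothetical mono recordings; not files from the study.
    speech, sr = sf.read("l2_talker_utterance.wav")
    noise, _ = sf.read("babble_noise.wav")

    # Present quiet speech first, then gradually louder noise (SNRs illustrative).
    for snr_db in (10, 5, 0, -5):
        mixed = mix_at_snr(speech, noise, snr_db)
        # ... play `mixed` to listeners and collect transcriptions ...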
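For item 4, the key operation is placing speech samples in a learned similarity space and measuring distances between them. The abstract does not name the model, so this sketch assumes a common self-supervised encoder (wav2vec 2.0 via torchaudio) and cosine distance over mean-pooled features, purely for illustration; the paper's learned space may differ.

    import torch
    import torchaudio

    bundle = torchaudio.pipelines.WAV2VEC2_BASE
    model = bundle.get_model().eval()

    def embed(path: str) -> torch.Tensor:
        """Summarize one mono utterance as a single mean-pooled feature vector."""
        waveform, sr = torchaudio.load(path)  # shape: (1, frames) for mono audio
        waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)
        with torch.no_grad():
            features, _ = model.extract_features(waveform)
        return features[-1].mean(dim=1).squeeze(0)  # last layer, pooled over time

    # Hypothetical file names for an L1 reference sample and an L2 sample.
    l1_vec = embed("l1_reference.wav")
    l2_vec = embed("l2_sample.wav")

    # Larger distance from L1 speech would predict lower recognition accuracy.
    distance = 1 - torch.nn.functional.cosine_similarity(l1_vec, l2_vec, dim=0)
    print(f"distance from L1 reference: {distance.item():.3f}")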
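For item 5, the evaluation pipeline is: transcribe speech-in-noise with an ASR system, then score the output against a reference transcription. This sketch uses the openai-whisper package and word error rate (WER) via jiwer; the model size, audio file, and reference text are illustrative assumptions.

    import whisper           # openai-whisper package
    from jiwer import wer    # standard word-error-rate implementation

    model = whisper.load_model("base")

    reference = "the boy ran quickly down the hill"        # hypothetical reference
    result = model.transcribe("l2_speech_in_noise.wav")    # hypothetical file
    hypothesis = result["text"].lower().strip()

    # Lower WER means the ASR output is closer to the reference; comparing such
    # scores against human transcription accuracy is the study's central method.
    print(f"hypothesis: {hypothesis}")
    print(f"WER: {wer(reference, hypothesis):.2f}")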