This content will become publicly available on June 9, 2026

Title: Socially-guided allocation of attention and the memory encoding of spoken language
It is now well established that memory representations of words are acoustically rich. Alongside this development, a related line of work has shown that the robustness of memory encoding varies widely depending on who is speaking. In this dissertation, I explore the cognitive basis of memory asymmetries at a larger linguistic level (spoken sentences), using the mechanism of socially guided attention allocation to explain how listeners dynamically shift cognitive resources based on the social characteristics of speech. This dissertation consists of three empirical studies designed to investigate the factors that pattern asymmetric memory for spoken language. In the first study, I explored specificity effects at the level of the sentence. While previous research on specificity has centralized the lexical item as the unit of study, I showed that talker-specific memory patterns are also robust at a larger linguistic level, making it likely that acoustic detail is fundamental to human speech perception more broadly. In the second study, I introduced a set of diverse talkers and showed that memory patterns vary widely within this group, and that the memorability of individual talkers is somewhat consistent across listeners. In the third study, I showed that memory behaviors do not depend merely on the speech characteristics of the talker or on the content of the sentence, but on the unique relationship between these two. Memory dramatically improved when the semantic content of sentences was congruent with widely held social associations with talkers based on their speech, and this effect was particularly pronounced when listeners were under high cognitive load during encoding. These data collectively provide evidence that listeners allocate attentional resources on an ad hoc, socially guided basis. Listeners subconsciously draw on fine-grained phonetic information and social associations to dynamically adapt low-level cognitive processes while understanding spoken language and encoding it to memory. This approach positions variation in speech not as an obstacle to perception, but as an information source that humans readily recruit to aid in the seamless understanding of spoken language.
Award ID(s):
2314753
PAR ID:
10645873
Author(s) / Creator(s):
Publisher / Repository:
Stanford University Libraries
Date Published:
Format(s):
Medium: X
Institution:
Stanford University
Sponsoring Org:
National Science Foundation
More Like this
  1. Over the past 35 years, it has been established that mental representations of language include fine-grained acoustic details stored in episodic memory. The empirical foundations of this fact were established through a series of word recognition experiments showing that participants were better at remembering words repeated by the same talker than words repeated by a different talker (talker-specificity effect). This effect has been widely replicated, but exclusively with isolated, generally monosyllabic words as the object of study. Whether fine-grained acoustic detail plays a role in the encoding and retrieval of larger structures, such as spoken sentences, has important implications for theories of language understanding in natural communicative contexts. In this study, we extended traditional recognition memory methods to use full spoken sentences rather than individual words as stimuli. Additionally, we manipulated attention at the time of encoding in order to probe the automaticity of fine-grained acoustic encoding. Participants were more accurate for sentences repeated by the same talker than by a different talker. They were also faster and more accurate in the Full Attention than in the Divided Attention condition. The specificity effect was more pronounced for the Divided Attention than for the Full Attention group. These findings provide evidence for specificity at the sentence level. They also highlight the implicit, automatic encoding of fine-grained acoustic detail and point to a central role for cognitive resource allocation in shaping memory-based language representations.
  2. When listeners encounter a difficult-to-understand talker in a difficult-to-understand situation, their perceptual mechanisms can adapt, making the talker in that situation easier to understand. This study examined talker-specific perceptual adaptation experimentally by embedding speech from second-language (L2) English talkers in varying levels of noise and collecting transcriptions from first-language English listeners (ten talkers, 100 listeners per experiment). Experiments 1 and 2 demonstrated that prior experience with an L2 talker's speech presented first without noise and then with gradually increasing levels of noise facilitated recognition of that talker in loud noise. Experiment 3 tested whether adaptation is driven by tuning-in to the talker's voice and speech patterns, by examining recognition of speech-in-loud-noise following experience with the talker in quiet. Finally, experiment 4 tested whether adaptation is driven by tuning-out the background noise, by measuring speech-in-loud-noise recognition after experience with the talker in consistently loud noise. The results showed that both tuning-in to the talker and tuning-out the noise contribute to talker-specific perceptual adaptation to L2 speech-in-noise.
  3. Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners’ language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a “release from masking” from lower-level acoustic differences between languages and talkers, or higher-level cognitive and linguistic factors? To address this question, we created a “one-man bilingual cocktail party” selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin–English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin–English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the “cocktail party” paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerge only in the most demanding target-to-masker contexts. The “one-man bilingual cocktail party” establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin–English bilinguals.
  4. Recent work on perceptual learning for speech has suggested that while high-variability training typically results in generalization, low-variability exposure can sometimes be sufficient for cross-talker generalization. We tested predictions of a similarity-based account, according to which generalization depends on training-test talker similarity rather than on exposure to variability. We compared perceptual adaptation to second-language (L2) speech following single- or multiple-talker training with a round-robin design in which four L2 English talkers from four different first-language (L1) backgrounds served as both training and test talkers. After exposure to 60 L2 English sentences in one training session, cross-talker/cross-accent generalization was possible (but not guaranteed) following either multiple- or single-talker training, with variation across training-test talker pairings. Contrary to predictions of the similarity-based account, adaptation was not consistently better for identical than for mismatched training-test talker pairings, and generalization patterns were asymmetrical across training-test talker pairs. Acoustic analyses also revealed a dissociation between phonetic similarity and cross-talker/cross-accent generalization. Notably, variation in adaptation and generalization related to variation in training phase intelligibility. Together with prior evidence, these data suggest that perceptual learning for speech may benefit from some combination of exposure to talker variability, training-test similarity, and high training phase intelligibility.
  5. We examined the neural correlates underlying the semantic processing of native- and nonnative-accented sentences, presented in quiet or embedded in multi-talker noise. Implementing a semantic violation paradigm, 36 English monolingual young adults listened to American-accented (native) and Chinese-accented (nonnative) English sentences with or without semantic anomalies, presented in quiet or embedded in multi-talker noise, while EEG was recorded. After hearing each sentence, participants verbally repeated the sentence, which was coded and scored as an offline comprehension accuracy measure. In line with earlier behavioral studies, the negative impact of background noise on sentence repetition accuracy was higher for nonnative-accented than for native-accented sentences. At the neural level, the N400 effect for semantic anomaly was larger for native-accented than for nonnative-accented sentences, and was also larger for sentences presented in quiet than in noise, indicating impaired lexical-semantic access when listening to nonnative-accented speech or sentences embedded in noise. No semantic N400 effect was observed for nonnative-accented sentences presented in noise. Furthermore, the frequency of neural oscillations in the alpha frequency band (an index of online cognitive listening effort) was higher when listening to sentences in noise versus in quiet, but no difference was observed across the accent conditions. Semantic anomalies presented in background noise also elicited higher theta activity, whereas processing nonnative-accented anomalies was associated with decreased theta activity. Taken together, we found that listening to nonnative accents or background noise is associated with processing challenges during online semantic access, leading to decreased comprehension accuracy. However, the underlying cognitive mechanism (e.g., associated listening effort) might manifest differently across accented speech processing and speech-in-noise processing.