

Title: Phasic pupillary responses reveal differential engagement of attentional control in bilingual spoken language processing
Abstract

Language processing is cognitively demanding, requiring attentional resources to efficiently select and extract linguistic information as utterances unfold. Previous research has associated changes in pupil size with increased attentional effort. However, it is unknown whether the behavioral ecology of speakers may differentially affect engagement of the attentional resources involved in conversation. For bilinguals, such an act potentially involves competing signals in more than one language, and how this competition arises may differ across communicative contexts. We examined changes in pupil size during the comprehension of unilingual and codeswitched speech in a richly characterized bilingual sample. In a visual-world task, participants saw pairs of objects as they heard instructions to select a target image. Instructions were either unilingual or codeswitched from one language to the other. We found that only bilinguals who use each of their languages in separate communicative contexts, and who have high attention ability, show differential attention to unilingual and codeswitched speech. Bilinguals for whom codeswitching is common practice process unilingual and codeswitched speech similarly, regardless of attentional skill. Taken together, these results suggest that bilinguals recruit different language control strategies for distinct communicative purposes. The interactional context of language use critically determines attentional control engagement during language processing.
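
As a rough illustration of how phasic pupillary data of this kind are often analyzed, the sketch below baseline-corrects trial-level pupil traces and compares mean dilation between unilingual and codeswitched instructions. The column names, window bounds, and comparison are assumptions for illustration, not the authors' pipeline.

```python
# Illustrative sketch only (assumed column names and windows; not the authors' pipeline):
# baseline-correct trial-level pupil traces, then compare mean phasic dilation
# between unilingual and codeswitched instructions.
import pandas as pd

def baseline_correct(trial: pd.DataFrame, baseline_window=(-200, 0)) -> pd.DataFrame:
    """Subtract each trial's mean pre-onset pupil size from all of its samples."""
    lo, hi = baseline_window
    baseline = trial.loc[trial["time_ms"].between(lo, hi), "pupil"].mean()
    out = trial.copy()
    out["pupil_corrected"] = out["pupil"] - baseline
    return out

def condition_means(samples: pd.DataFrame, analysis_window=(200, 2000)) -> pd.Series:
    """Mean baseline-corrected dilation per condition within the analysis window."""
    lo, hi = analysis_window
    corrected = (samples.groupby(["subject", "trial"], group_keys=False)
                        .apply(baseline_correct))
    in_window = corrected[corrected["time_ms"].between(lo, hi)]
    return in_window.groupby("condition")["pupil_corrected"].mean()

# Expected long format: one row per sample with columns
# subject, trial, condition ("unilingual" / "codeswitched"), time_ms, pupil.
# print(condition_means(pd.read_csv("pupil_samples.csv")))
```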

 
NSF-PAR ID: 10383786
Author(s) / Creator(s):
Publisher / Repository: Nature Publishing Group
Date Published:
Journal Name: Scientific Reports
Volume: 11
Issue: 1
ISSN: 2045-2322
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Bilinguals learn to resolve conflict between their two languages, and that skill has been hypothesized to create long-term adaptive changes in cognitive functioning. Yet little is known about how bilinguals recruit cognitive control to enable efficient use of one of their languages, especially in the less skilled and more effortful second language (L2). Here we examined how real-time cognitive control engagement influences L2 sentence comprehension (i.e., conflict adaptation). We tested a group of English monolinguals and a group of L2 English speakers using a recently developed cross-task adaptation paradigm. Stroop sequences were pseudo-randomly interleaved with a visual-world paradigm in which participants were asked to carry out spoken instructions that were either syntactically ambiguous or unambiguous. Consistent with previous research, eye-movement results showed that Stroop-related conflict improved the ability to engage correct-goal interpretations, and disengage incorrect-goal interpretations, during ambiguous instructions. Such cognitive-to-language modulations were similar in both groups, but only for engagement. For disengagement, the modulation emerged earlier in bilinguals than in monolinguals, suggesting group differences in attentional disengagement following cognitive control recruitment. Additionally, incorrect-goal eye movements were modulated by individual differences in working memory, although differently for each group, suggesting an involvement of both language-specific and domain-general resources.
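
As a rough illustration of the kind of eye-movement measure used in visual-world studies like this one, the sketch below computes the proportion of fixations to each interest area over time, split by whether the preceding Stroop trial was congruent or incongruent. Column names, interest-area labels, and bin size are assumptions, not the study's analysis.

```python
# Illustrative sketch only (assumed column names, labels, and bin size):
# time-binned proportion of fixations to correct-goal vs. incorrect-goal objects,
# split by the congruency of the preceding Stroop trial.
import pandas as pd

def fixation_proportions(fixations: pd.DataFrame, bin_ms: int = 50) -> pd.DataFrame:
    """Bin fixation samples and compute the proportion landing on each interest area."""
    df = fixations.copy()
    df["time_bin"] = (df["time_ms"] // bin_ms) * bin_ms
    counts = (df.groupby(["prev_stroop", "time_bin", "interest_area"])
                .size()
                .unstack("interest_area", fill_value=0))
    return counts.div(counts.sum(axis=1), axis=0)

# Expected long format: one row per sample with columns
# prev_stroop ("congruent" / "incongruent"), time_ms,
# interest_area ("correct_goal" / "incorrect_goal" / "other").
# props = fixation_proportions(pd.read_csv("vwp_fixations.csv"))
```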
  2. In individuals who know more than one language, the languages are always active to some degree. This has consequences for language processing, but bilinguals rarely make mistakes in language selection. A prevailing explanation is that bilingualism is supported by strong cognitive control abilities, developed through long-term practice with managing multiple languages and spilling over into more general executive functions. However, not all bilinguals are the same, and not all contexts for bilingualism provide the same support for control and regulation abilities. This paper reviews research on hearing sign–speech bimodal bilinguals, who have a unique ability to use and comprehend their two languages at the same time. We discuss how this research prompts a re-examination of the role of cognitive control in bilingual language regulation, focusing on how results from bimodal bilingualism research relate to recent findings emphasizing the correlation of control abilities with a bilingual's contexts of language use. Most bimodal bilingualism research has involved individuals in highly English-dominant language contexts. We offer a critical examination of how existing bimodal bilingualism findings have been interpreted, discuss the value of broadening the scope of this research, and identify long-standing questions about bilingualism and L2 learning that might benefit from this perspective.
  3. INTRODUCTION: Apollo-11 (A-11) was the first crewed space mission to successfully bring astronauts to the moon and return them safely. Effective team-based communication was required for mission specialists to work collaboratively to learn, engage, and solve complex problems. As part of NASA's goal of assessing team and mission success, all vital speech communications between these personnel were recorded using the multi-track SoundScriber system onto analog tapes, preserving their contribution to one of the greatest achievements in human history. More than 400 personnel served as mission specialists/support and communicated across 30 audio loops, resulting in over 9,000 hours of data for A-11. To ensure the success of this mission, teams had to communicate, learn, and address problems in a timely manner. Previous research has found that compatibility of individual personalities within teams is important for effective collaboration among those individuals. Hence, it is essential to identify each speaker's role during an Apollo mission and analyze group communications for knowledge exchange and problem solving toward a common goal. Assessing and analyzing speaker roles during the mission also allows exploration of engagement analysis in multi-party speaker situations. METHOD: The UTDallas Fearless Steps Apollo data comprise 19,000 hours (A-11, A-13, A-1) and pose unique, multiple challenges: the audio is characterized by severe noise and degradation as well as overlap instances across the 30 channels. For our study, we selected a subset of 100 hours manually transcribed by professional annotators for speaker labels. The 100 hours are drawn from three mission-critical events: 1. Lift-Off (25 hours), 2. Lunar-Landing (50 hours), and 3. Lunar-Walking (25 hours). Five channels of interest, those with the most speech activity, were selected out of the 30 channels; the primary speakers operating these five channels are the commanders/owners of those channels. For our analysis, we select five speaker roles: Flight Director (FD), Capsule Communicator (CAPCOM), Guidance, Navigation, and Control (GNC), Electrical, Environmental, and Consumables Manager (EECOM), and Network (NTWK). To track and tag individual speakers across our Fearless Steps audio dataset, we use a "Where's Waldo" approach to identify all instances of our speakers of interest within a cluster of other speakers. To understand the roles of these speakers of interest, we use speaking duration (primary vs. secondary speaker) and speaker turns as metrics to determine each speaker's role and responsibility during the three critical phases of the mission. This enables a content-linking capability and provides a pathway to analyzing group engagement, the group dynamics of people working together in an enclosed space, psychological effects, and cognition in such individuals. IMPACT: NASA's Apollo Program stands as one of the most significant contributions to humankind. This collection opens new research options for recognizing team communication, group dynamics, and human engagement/psychology for future deep-space missions. Analyzing team communications toward such goals would allow the formulation of educational and training technologies for assessment of STEM knowledge, task learning, and educational feedback. Identifying these personnel can also pay tribute and yield personal recognition to the hundreds of notable engineers and scientists who made this feat possible. ILLUSTRATION: In this work, we propose to illustrate how a pre-trained speech/language network can be used to obtain powerful speaker embeddings needed for speaker diarization. This framework builds on these learned embeddings to label unique speakers over sustained audio streams. To train and test our system, we make use of the Fearless Steps Apollo corpus, allowing us to effectively leverage a limited labeling resource (100 hours of labeled data out of more than 9,000 hours). Furthermore, we use the "Finding Waldo" concept to identify key speakers of interest (SOI) throughout the Apollo-11 mission audio across multiple channel audio streams.
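
As a rough illustration of the turn-taking metrics described above (per-channel speaking duration and speaker turns, separating primary from secondary speakers), the sketch below computes them from diarization-style segment labels. The segment format and example values are assumptions, not the Fearless Steps tooling.

```python
# Illustrative sketch only (segment format and example values assumed):
# per-channel speaker duration and turn counts from diarization-style labels,
# separating a loop's primary speaker (e.g., FD, CAPCOM) from secondary speakers.
from collections import defaultdict

# Each segment: (channel, speaker, start_sec, end_sec).
segments = [
    ("FD_loop", "FD", 0.0, 4.2),
    ("FD_loop", "CAPCOM", 4.5, 6.1),
    ("FD_loop", "FD", 6.3, 9.0),
]

def channel_stats(segments):
    durations = defaultdict(float)   # (channel, speaker) -> total talk time in seconds
    turns = defaultdict(int)         # (channel, speaker) -> number of turns taken
    last_speaker = {}                # channel -> speaker of the previous segment
    for channel, speaker, start, end in sorted(segments, key=lambda s: (s[0], s[2])):
        durations[(channel, speaker)] += end - start
        if last_speaker.get(channel) != speaker:   # a speaker change marks a new turn
            turns[(channel, speaker)] += 1
        last_speaker[channel] = speaker
    return durations, turns

durations, turns = channel_stats(segments)
for (channel, speaker), secs in sorted(durations.items()):
    print(f"{channel} {speaker}: {secs:.1f} s across {turns[(channel, speaker)]} turn(s)")
```
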
  4. Abstract
    Speech perception involves both conceptual cues and perceptual cues. Each has individually been shown to guide bilinguals' speech perception, but their potential interaction has been ignored: bilinguals have typically been given perceptual cues that could be predicted from the conceptual cues. Therefore, to target the perceptual-conceptual interaction, we created a restricted range of perceptual cues that either matched, or mismatched, bilinguals' conceptual predictions based on the language context. Specifically, we designed an active speech perception task that concurrently collected electrophysiological data from Spanish–English bilinguals and English monolinguals to address the extent to which this cue interaction uniquely affects bilinguals' speech sound perception and allocation of attentional resources. Bilinguals' larger MMN-N2b in the mismatched context aligns with the Predictive Coding Hypothesis, suggesting that bilinguals use their diverse perceptual routines to best allocate cognitive resources to perceive speech.
  5. Abstract

    Everyday environments often contain distracting competing talkers and background noise, requiring listeners to focus their attention on one acoustic source and reject others. During this auditory attention task, listeners may naturally interrupt their sustained attention and switch attended sources. The effort required to perform this attention switch has not been well studied in the context of competing continuous speech. In this work, we developed two variants of endogenous attention switching and a sustained-attention control. We characterized these three experimental conditions in the context of decoding auditory attention, while simultaneously evaluating listening effort and neural markers of spatial-audio cues. A least-squares, electroencephalography (EEG)-based attention decoding algorithm was implemented across all conditions. It achieved an accuracy of 69.4% and 64.0% when computed over nonoverlapping 10-s and 5-s correlation windows, respectively. Both decoders showed smooth transitions in the attended-talker prediction through switches at approximately half of the analysis window size (e.g., the mean lag taken across the two switch conditions was 2.2 s when the 5-s correlation window was used). Expended listening effort, as measured by simultaneous EEG and pupillometry, was also a strong indicator of whether listeners sustained attention or performed an endogenous attention switch (indexed by peak pupil diameter and minimum parietal alpha power). We additionally found evidence of talker spatial cues in the form of centrotemporal alpha power lateralization. These results suggest that listener effort and spatial cues may be promising features to pursue in a decoding context, in addition to speech-based features.
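
As a rough illustration of a least-squares, EEG-based attention decoder of the general kind described, the sketch below fits a ridge-regularized backward model that reconstructs the attended speech envelope from time-lagged EEG and labels the attended talker per correlation window. Array shapes, lag range, regularization, and window length are assumptions, not the study's settings.

```python
# Illustrative sketch only (assumed shapes, lags, regularization, and window
# length; not the study's pipeline): a ridge-regularized least-squares backward
# model reconstructs the attended speech envelope from time-lagged EEG, and the
# talker whose envelope correlates best with the reconstruction in each
# nonoverlapping window is labeled as attended.
import numpy as np

def lagged(eeg: np.ndarray, lags: range) -> np.ndarray:
    """Stack time-lagged copies of every EEG channel: (T, C) -> (T, C * len(lags))."""
    T, C = eeg.shape
    X = np.zeros((T, C * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(eeg, lag, axis=0)
        shifted[:lag] = 0.0                      # zero out samples wrapped around by roll
        X[:, i * C:(i + 1) * C] = shifted
    return X

def train_decoder(eeg, attended_env, lags=range(0, 26), alpha=1e3):
    """Fit backward-model weights w by solving (X'X + alpha*I) w = X'y."""
    X = lagged(eeg, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ attended_env)

def decode_attention(eeg, env_a, env_b, w, lags=range(0, 26), win=640):
    """Label the attended talker in each nonoverlapping correlation window
    (e.g., win=640 samples corresponds to 10 s at a 64 Hz envelope rate)."""
    recon = lagged(eeg, lags) @ w
    labels = []
    for start in range(0, len(recon) - win + 1, win):
        s = slice(start, start + win)
        r_a = np.corrcoef(recon[s], env_a[s])[0, 1]
        r_b = np.corrcoef(recon[s], env_b[s])[0, 1]
        labels.append("A" if r_a > r_b else "B")
    return labels
```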

     