Title: Leveraging interdisciplinary perspectives to optimize auditory training for cochlear implant users
Abstract

This review examines the role of auditory training in speech adaptation for cochlear implant users. A current limitation of the existing evidence base is the failure to adequately account for the wide variability in speech perception outcomes following implantation. While many preimplantation factors contribute to the variance observed in outcomes, formal auditory training has been proposed as a way to maximize speech comprehension benefits for cochlear implant users. We adopt an interdisciplinary perspective and focus on integrating the clinical rehabilitation literature with basic research examining perceptual learning of speech. We review findings on the role of auditory training in improving perception of degraded speech signals in normal hearing listeners, with emphasis on how lexically oriented training paradigms may facilitate speech comprehension when the acoustic input is diminished. We conclude with recommendations for future research that could foster translation of principles of speech learning in normal hearing listeners to aural rehabilitation protocols for cochlear implant patients.

 
Award ID(s):
1735225
NSF-PAR ID:
10456439
Author(s) / Creator(s):
 ;  
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Language and Linguistics Compass
Volume:
14
Issue:
9
ISSN:
1749-818X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Purpose: The goal of this study was to assess the listening behavior and social engagement of cochlear implant (CI) users and normal-hearing (NH) adults in daily life and relate these actions to objective hearing outcomes. Method: Ecological momentary assessments (EMAs) collected using a smartphone app were used to probe patterns of listening behavior in CI users and age-matched NH adults to detect differences in social engagement and listening behavior in daily life. Participants completed very short surveys every 2 hr to provide snapshots of typical, everyday listening and socializing, as well as longer, reflective surveys at the end of the day to assess listening strategies and coping behavior. Speech perception testing, with accompanying ratings of task difficulty, was also performed in a lab setting to uncover possible correlations between objective and subjective listening behavior. Results: Comparisons between speech intelligibility testing and EMA responses showed that poorer-performing CI users spent more time at home and less time conversing with others than higher-performing CI users and their NH peers. Perception of listening difficulty also differed markedly between CI users and NH listeners, with CI users reporting little difficulty despite poor speech perception performance. However, both CI users and NH listeners spent most of their time in listening environments they considered "not difficult." CI users also reported using several compensatory listening strategies, such as visual cues, whereas NH listeners did not. Conclusion: Overall, the data indicate systematic differences in how individual CI users and NH adults navigate and manipulate listening and social environments in everyday life.
  2. This study investigates the integration of word-initial fundamental frequency (F0) and voice-onset-time (VOT) in stop voicing categorization for adult listeners with normal hearing (NH) and unilateral cochlear implant (CI) recipients utilizing a bimodal hearing configuration [CI + contralateral hearing aid (HA)]. Categorization was assessed for ten adults with NH and ten adult bimodal listeners, using synthesized consonant stimuli interpolating between /ba/ and /pa/ exemplars with five-step VOT and F0 conditions. All participants demonstrated the expected categorization pattern by reporting /ba/ for shorter VOTs and /pa/ for longer VOTs, with NH listeners generally making more use of VOT as a voicing cue than CI listeners. When VOT was ambiguous between voiced and voiceless stops, NH listeners made more use of F0 as a cue to voicing than CI listeners, and CI listeners showed greater utilization of initial F0 during voicing identification in their bimodal (CI + HA) condition than in the CI-alone condition. The results demonstrate the adjunctive benefit of acoustic hearing from the non-implanted ear for listening conditions involving spectrotemporally complex stimuli. This finding may lead to the development of a clinically feasible perceptual weighting task that could inform clinicians about bimodal efficacy and the risk-benefit profile associated with bilateral CI recommendation.

     
  3. Abstract

    A training method to improve speech hearing in noise has proven elusive, with most methods failing to transfer to untrained tasks. One common approach to identify potentially viable training paradigms is to make use of cross-sectional designs. For instance, the consistent finding that people who chose to avidly engage with action video games as part of their normal life also show enhanced performance on non-game visual tasks has been used as a foundation to test the causal impact of such game play via true experiments (e.g., in more translational designs). However, little work has examined the association between action video game play and untrained auditory tasks, which would speak to the possible utility of using such games to improve speech hearing in noise. To examine this possibility, 80 participants with mixed action video game experience were tested on a visual reaction time task that has reliably shown superior performance in action video game players (AVGPs) compared to non-players (≤ 5 h/week across game categories) and multi-genre video game players (> 5 h/week across game categories). Auditory cognition and perception were tested using auditory reaction time and two speech-in-noise tasks. Performance of AVGPs on the visual task replicated previous positive findings. However, no significant benefit of action video game play was found on the auditory tasks. We suggest that, while AVGPs interact meaningfully with a rich visual environment during play, they may not interact with the games’ auditory environment. These results suggest that far transfer learning during action video game play is modality-specific and that an acoustically relevant auditory environment may be needed to improve auditory probabilistic thinking.

     
  4. Introduction: Using data collected from hearing aid users' own hearing aids could improve the customization of hearing aid processing for different users based on the auditory environments they encounter in daily life. Prior studies characterizing hearing aid users' auditory environments have focused on mean sound pressure levels and proportions of environments based on classifications. In this study, we extend these approaches by introducing entropy to quantify the diversity of auditory environments hearing aid users encounter. Materials and Methods: Participants from 4 groups (younger listeners with normal hearing and older listeners with hearing loss from an urban or rural area) wore research hearing aids and completed ecological momentary assessments on a smartphone for 1 week. The smartphone was programmed to sample the processing state (input sound pressure level and environment classification) of the hearing aids every 10 min and deliver an ecological momentary assessment every 40 min. Entropy values for sound pressure levels, environment classifications, and ecological momentary assessment responses were calculated for each participant to quantify the diversity of auditory environments encountered over the course of the week. Entropy values between groups were compared. Group differences in entropy were compared to prior work reporting differences in mean sound pressure levels and proportions of environment classifications. Group differences in entropy measured objectively from the hearing aid data were also compared to differences in entropy measured from the self-report ecological momentary assessment data. Results: Auditory environment diversity, quantified using entropy from the hearing aid data, was significantly higher for younger listeners than older listeners. Entropy measured using ecological momentary assessment was also significantly higher for younger listeners than older listeners. Discussion: Using entropy, we show that younger listeners experience a greater diversity of auditory environments than older listeners. Alignment of group entropy differences with previously reported differences in sound pressure levels and hearing aid feature activation, along with alignment with ecological momentary response entropy, suggests that entropy is a valid and useful metric. We conclude that entropy is a simple and intuitive way to measure auditory environment diversity using hearing aid data.
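    To make the entropy measure above concrete, here is a minimal sketch (not the authors' implementation) of how Shannon entropy could be computed over a week of hearing-aid environment classifications; the function name and sample labels are hypothetical, and the approach simply treats the 10-minute samples as a categorical distribution.

    ```python
    from collections import Counter
    from math import log2

    def environment_entropy(classifications):
        """Shannon entropy (in bits) of a sequence of environment-classification labels.

        Higher values indicate a more diverse mix of listening environments.
        """
        counts = Counter(classifications)
        total = sum(counts.values())
        probabilities = [count / total for count in counts.values()]
        return -sum(p * log2(p) for p in probabilities)

    # Hypothetical 10-minute samples from a hearing aid's environment classifier.
    week_of_labels = ["quiet", "quiet", "speech", "speech_in_noise", "music", "quiet"]
    print(f"Entropy: {environment_entropy(week_of_labels):.2f} bits")
    ```

    The same calculation could be applied to binned sound pressure levels or to ecological momentary assessment responses, which is how the study compares objective and self-report measures of environmental diversity.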
  5. Abstract

    Though listeners readily recognize speech from a variety of talkers, accommodating talker variability comes at a cost: Myriad studies have shown that listeners are slower to recognize a spoken word when there is talker variability compared with when talker is held constant. This review focuses on two possible theoretical mechanisms for the emergence of these processing penalties. One view is that multitalker processing costs arise through a resource-demanding talker accommodation process, wherein listeners compare sensory representations against hypothesized perceptual candidates and error signals are used to adjust the acoustic-to-phonetic mapping (an active control process known as contextual tuning). An alternative proposal is that these processing costs arise because talker changes involve salient stimulus-level discontinuities that disrupt auditory attention. Some recent data suggest that multitalker processing costs may be driven by both mechanisms operating over different time scales. Fully evaluating this claim requires a foundational understanding of both talker accommodation and auditory streaming; this article provides a primer on each literature and also reviews several studies that have observed multitalker processing costs. The review closes by underscoring a need for comprehensive theories of speech perception that better integrate auditory attention and by highlighting important considerations for future research in this area.

     