

This content will become publicly available on December 2, 2025

Title: Empatica E4 Assessment of Child Physiological Measures of Listening Effort During Remote and In-Person Communication
Purpose: Telepractice is a growing service model that delivers aural rehabilitation to deaf and hard-of-hearing children via telecommunications technology. Despite its known benefits, this delivery approach may increase patients' listening effort (LE), characterized as the allocation of cognitive resources toward an auditory task. This study tested techniques for collecting physiological measures of LE in normal-hearing (NH) children during remote (referred to as tele-) and in-person communication using the wearable Empatica E4 wristband. Method: Participants were 10 children (age range: 9–12 years) who attended two tele- and two in-person weekly sessions, order counterbalanced. During each session, the children heard a short passage read by the clinical provider, completed an auditory passage comprehension task, and self-rated their effort as part of the larger study. Measures of electrodermal activity and blood volume pulse amplitude were collected from the child's E4 wristband. Results: No differences between in-person sessions and telesessions were found in the children's subjective or physiological measures of LE or in their passage comprehension scores. However, an effect of treatment duration on subjective and physiological measures of LE was identified: children self-reported a significant increase in LE over time, whereas their physiological measures showed a trend toward decreasing LE. A significant association between subjective measures and the passage comprehension task was found, suggesting that children who reported more effort produced a higher proportion of correct responses. Conclusions: The study demonstrated the feasibility of collecting physiological measures of LE in NH children during remote and in-person communication using the E4 wristband. The results suggest that measures of LE are multidimensional and may reflect different sources of, or cognitive responses to, increased listening demand.
Supplemental Material: https://doi.org/10.23641/asha.27122064
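The abstract reports electrodermal activity (EDA) and blood volume pulse (BVP) amplitude collected from the E4 wristband. As a minimal sketch, assuming Empatica's documented CSV export layout (first row = session start as a Unix timestamp, second row = sample rate in Hz, remaining rows = samples), the data could be loaded and summarized as below. The feature choices (session-mean EDA, peak-to-trough BVP pulse amplitude) are illustrative assumptions, not the study's actual analysis pipeline.

```python
# Hedged sketch: parsing Empatica E4-style CSV exports (e.g., EDA.csv, BVP.csv)
# and computing two simple physiological summaries. Assumes the standard E4
# export layout; not the study's actual pipeline.
import csv


def load_e4_channel(path):
    """Return (start_time, sample_rate, samples) from an E4-style export file."""
    with open(path, newline="") as f:
        rows = [row for row in csv.reader(f) if row]
    start_time = float(rows[0][0])    # row 1: session start (Unix time)
    sample_rate = float(rows[1][0])   # row 2: sample rate in Hz
    samples = [float(row[0]) for row in rows[2:]]
    return start_time, sample_rate, samples


def mean_eda(samples):
    """Session-mean skin conductance (an illustrative EDA summary)."""
    return sum(samples) / len(samples)


def bvp_pulse_amplitudes(samples):
    """Peak-to-trough amplitude per pulse, via strict local maxima/minima."""
    peaks = [samples[i] for i in range(1, len(samples) - 1)
             if samples[i - 1] < samples[i] > samples[i + 1]]
    troughs = [samples[i] for i in range(1, len(samples) - 1)
               if samples[i - 1] > samples[i] < samples[i + 1]]
    n = min(len(peaks), len(troughs))
    return [p - t for p, t in zip(peaks[:n], troughs[:n])]
```

In a real analysis, BVP peak detection would need filtering and artifact rejection; the local-extrema rule above is only meant to show the shape of the computation.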
Award ID(s):
1838808
PAR ID:
10565372
Author(s) / Creator(s):
Publisher / Repository:
ASA
Date Published:
Journal Name:
American Journal of Audiology
Volume:
33
Issue:
4
ISSN:
1059-0889
Page Range / eLocation ID:
1331 to 1340
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Purpose: We examined the neurocognitive bases of lexical morphology in children of varied reading abilities to understand the role of meaning-based skills in learning to read with dyslexia. Method: Children completed auditory morphological and phonological awareness tasks during functional near-infrared spectroscopy neuroimaging. We first examined the relation between lexical morphology and phonological processes in typically developing readers (Study 1, N = 66, mean age = 8.39), followed by a more focal inquiry into lexical morphology processes in dyslexia (Study 2, N = 50, mean age = 8.62). Results: Typical readers exhibited stronger engagement of language neurocircuitry during the morphology task relative to the phonology task, suggesting that morphological analyses involve synthesizing multiple components of sublexical processing. This effect was stronger for more analytically complex derivational affixes (like + ly) than for more semantically transparent free base morphemes (snow + man). In contrast, children with dyslexia exhibited stronger activation during the free base condition relative to the derivational affix condition. Taken together, the findings suggest that although children with dyslexia may struggle with derivational morphology, they may also use free base morphemes' semantic information to boost word recognition. Conclusion: This study informs literacy theories by identifying an interaction between reading ability, word structure, and how the developing brain learns to recognize words in speech and print. Supplemental Material: https://doi.org/10.23641/asha.25944949
  2. Purpose: We examined which measures of complexity are most informative when studying language produced in interaction. Specifically, using these measures, we explored whether native and nonnative speakers modified the higher level properties of their production beyond the acoustic–phonetic level based on the language background of their conversation partner. Method: Using a subset of production data from the Wildcat Corpus that used Diapix, an interactive picture matching task, to elicit production, we compared English language production at the dyad and individual level across three different pair types: eight native pairs (English–English), eight mixed pairs (four English–Chinese and four English–Korean), and eight nonnative pairs (four Chinese–Chinese and four Korean–Korean). Results: At both the dyad and individual levels, native speakers produced longer and more clausally dense speech. They also produced fewer silent pauses and fewer linguistic mazes relative to nonnative speakers. Speakers did not modify their production based on the language background of their interlocutor. Conclusions: The current study examines higher level properties of language production in true interaction. Our results suggest that speakers' productions were determined by their own language background and were independent of that of their interlocutor. Furthermore, these measures demonstrated promise for capturing syntactic characteristics of language produced in true dialogue. Supplemental Material: https://doi.org/10.23641/asha.24712956
  3. Purpose: Rhyme increases the phonological similarity of phrases individuals hear and enhances recall from working memory. This study explores whether rhyme aids word learning and examines the underlying neural mechanisms through which rhyme facilitates word learning. Method: Fifty-seven adults completed a word learning task where they were exposed to 15 nonwords (NWs), four times each, in the sentence-final position as their electroencephalogram was recorded. Participants were randomly assigned to either a Rhyme or No-Rhyme group, where the NWs rhymed or did not rhyme with a prime real word in the sentence, respectively, during the exposure phase. Subjects were then tested on their recognition of the NWs. Results: Behavioral accuracy on the NW recognition task did not differ between the Rhyme and No-Rhyme groups; however, the Rhyme group showed an enhanced bilateral P2 in the early exposure phase, which has been linked to increased attention to the phonological and semantic features of speech. By contrast, the No-Rhyme group showed an attenuated N400 amplitude at left centroparietal sites in the later exposure phase, related to semantic retrieval of NWs. Conclusion: These findings indicate that rhyme may facilitate encoding of word meaning via attention to both phonology and semantic information; however, the absence of rhyme does not hinder adults' ability to map meaning to novel words. Supplemental Material: https://doi.org/10.23641/asha.29737478
  4. For social animals that communicate acoustically, hearing loss and social isolation are factors that independently influence social behavior. In human subjects, hearing loss may also contribute to objective and subjective measures of social isolation. Although the behavioral relationship between hearing loss and social isolation is evident, there is little understanding of their interdependence at the level of neural systems. Separate lines of research have shown that social isolation and hearing loss independently target the serotonergic system in the rodent brain. These two factors affect both presynaptic and postsynaptic measures of serotonergic anatomy and function, highlighting the sensitivity of serotonergic pathways to both types of insult. The effects of deficits in both acoustic and social inputs are seen not only within the auditory system, but also in other brain regions, suggesting relatively extensive effects of these deficits on serotonergic regulatory systems. Serotonin plays a much-studied role in depression and anxiety, and may also influence several aspects of auditory cognition, including auditory attention and understanding speech in challenging listening conditions. These commonalities suggest that serotonergic pathways are worthy of further exploration as potential intervening mechanisms between the related conditions of hearing loss and social isolation, and the affective and cognitive dysfunctions that follow. 
  5. Robots can use auditory, visual, or haptic interfaces to convey information to human users. The way these interfaces select signals is typically pre-defined by the designer: for instance, a haptic wristband might vibrate when the robot is moving and squeeze when the robot stops. But different people interpret the same signals in different ways, so that what makes sense to one person might be confusing or unintuitive to another. In this paper we introduce a unified algorithmic formalism for learning co-adaptive interfaces from scratch. Our method does not need to know the human's task (i.e., what the human is using these signals for). Instead, our insight is that interpretable interfaces should select signals that maximize correlation between the human's actions and the information the interface is trying to convey. Applying this insight we develop LIMIT: Learning Interfaces to Maximize Information Transfer. LIMIT optimizes a tractable, real-time proxy of information gain in continuous spaces. The first time a person works with our system the signals may appear random; but over repeated interactions the interface learns a one-to-one mapping between displayed signals and human responses. Our resulting approach is both personalized to the current user and not tied to any specific interface modality. We compare LIMIT to state-of-the-art baselines across controlled simulations, an online survey, and an in-person user study with auditory, visual, and haptic interfaces. Overall, our results suggest that LIMIT learns interfaces that enable users to complete the task more quickly and efficiently, and users subjectively prefer LIMIT to the alternatives. See videos here: https://youtu.be/IvQ3TM1_2fA.
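The core insight above, that a good interface selects signals correlating with what it is trying to convey, can be illustrated with a toy sketch. This is not the LIMIT algorithm itself; the mapping names and logged values are hypothetical, and a batch Pearson correlation stands in for LIMIT's real-time proxy of information gain.

```python
# Toy illustration of the correlation insight behind LIMIT (not the paper's
# actual algorithm): among candidate signal mappings, prefer the one whose
# logged human responses track the conveyed quantity most closely.
def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0


def best_mapping(conveyed, responses_by_mapping):
    """Pick the mapping whose responses have the highest |correlation|
    with the quantity the interface was trying to convey."""
    return max(responses_by_mapping,
               key=lambda name: abs(pearson(conveyed, responses_by_mapping[name])))
```

For example, if the interface tried to convey increasing urgency and the user's responses under a hypothetical "vibration_intensity" mapping rose in step while responses under a "random_tones" mapping did not, `best_mapping` would select the former.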