

Title: Why Ecological Momentary Assessment Surveys Go Incomplete: When It Happens and How It Impacts Data
Abstract
Background: Ecological momentary assessment (EMA) often requires respondents to complete surveys in the moment to report real-time experiences. Because EMA may seem disruptive or intrusive, respondents may not complete surveys as directed in certain circumstances.
Purpose: This article aims to determine the effect of environmental characteristics on the likelihood that respondents will not complete EMA surveys (referred to as survey incompletion), and to estimate the impact of survey incompletion on EMA self-report data.
Research Design: An observational study.
Study Sample: Ten adult hearing aid (HA) users.
Data Collection and Analysis: Experienced, bilateral HA users were recruited and fit with study HAs. The study HAs were equipped with a real-time data logging algorithm that recorded the data generated by the HAs (e.g., overall sound level, environment classification, and feature status, including microphone mode and amount of gain reduction). The study HAs were also connected via Bluetooth to a smartphone app, which collected the real-time logging data and presented the participants with EMA surveys about their listening environments and experiences. Participants wore the HAs and completed surveys in their daily lives for 1 week. Real-time data logging was triggered both when participants completed surveys and when they ignored or snoozed surveys. The logged data were used to estimate the effect of environmental characteristics on the likelihood of survey incompletion, and to predict participants' responses to survey questions in the instances of survey incompletion.
Results: Across the 10 participants, 715 surveys were completed and survey incompletion occurred 228 times. Mixed-effects logistic regression models indicated that survey incompletion was more likely in environments that were less quiet and contained more speech, noise, and machine sounds, and in environments in which directional microphones and noise reduction algorithms were enabled. The survey response predictions further indicated that participants could have reported more challenging environments and more listening difficulty in the instances of survey incompletion. However, the difference in the distribution of survey responses between the observed responses and the combined observed and predicted responses was small.
Conclusion: The present study indicates that EMA survey incompletion occurs systematically. Although survey incompletion could bias EMA self-report data, the impact is likely to be small.
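As a hedged illustration of the modeling described above, the following minimal sketch fits a mixed-effects logistic regression of survey incompletion on logged environmental characteristics, with a random intercept per participant. The column names (incomplete, sound_level, speech_prop, dir_mic, nr_on, participant) are hypothetical stand-ins for the paper's logged variables, and statsmodels' Bayesian mixed GLM is used here as a stand-in for a frequentist mixed-effects logistic model:

```python
# Sketch: mixed-effects logistic regression of survey incompletion (1 = prompt
# ignored/snoozed, 0 = completed) on environmental characteristics, with a
# random intercept per participant. All file and column names are hypothetical.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

ema = pd.read_csv("ema_log.csv")  # one row per EMA survey prompt

model = BinomialBayesMixedGLM.from_formula(
    "incomplete ~ sound_level + speech_prop + dir_mic + nr_on",
    {"participant": "0 + C(participant)"},  # per-participant random intercept
    ema,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())  # positive coefficients -> higher odds of incompletion
```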
Award ID(s):
1838830
NSF-PAR ID:
10309716
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of the American Academy of Audiology
Volume:
32
Issue:
01
ISSN:
1050-0545
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Background Ecological momentary assessment (EMA) is a methodology involving repeated surveys to collect in situ data that describe respondents' current or recent experiences and related contexts in their natural environments. Audiology literature investigating the test-retest reliability of EMA is scarce. Purpose This article examines the test-retest reliability of EMA in measuring the characteristics of listening contexts and listening experiences. Research Design An observational study. Study Sample Fifty-one older adults with hearing loss. Data Collection and Analysis The study was part of a larger study that examined the effect of hearing aid technologies. The larger study had four trial conditions, and outcomes were measured using a smartphone-based EMA system. After completing the four trial conditions, participants repeated one of the conditions to examine the EMA test-retest reliability. The EMA surveys contained questions that assessed listening context characteristics including talker familiarity, talker location, and noise location, as well as listening experiences including speech understanding, listening effort, loudness satisfaction, and hearing aid satisfaction. The data from multiple EMA surveys collected by each participant were aggregated in each of the test and retest conditions. Test-retest correlation on the aggregated data was then calculated for each EMA survey question to determine the reliability of EMA. Results At the group level, listening context characteristics and listening experience did not change between the test and retest conditions. The test-retest correlation varied across the EMA questions, with the highest being the questions that assessed talker location (median r = 1.0), reverberation (r = 0.89), and speech understanding (r = 0.85), and the lowest being the items that quantified noise location (median r = 0.63), talker familiarity (r = 0.46), listening effort (r = 0.61), loudness satisfaction (r = 0.60), and hearing aid satisfaction (r = 0.61). Conclusion Several EMA questions yielded appropriate test-retest reliability results. The lower test-retest correlations for some EMA survey questions were likely due to fewer surveys completed by participants and poorly designed questions. Therefore, the present study stresses the importance of using validated questions in EMA. With sufficient numbers of surveys completed by respondents and with appropriately designed survey questions, EMA could have reasonable test-retest reliability in audiology research.
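    A minimal sketch of the aggregation-then-correlation step described above, assuming a hypothetical table of per-survey EMA responses with columns participant, condition ("test"/"retest"), and speech_understanding; these names are illustrative, not the study's actual variables:

```python
# Sketch: test-retest reliability of one EMA item. Each participant's repeated
# EMA responses are aggregated (here: averaged) within the test and retest
# conditions, then correlated across participants. Names are hypothetical.
import pandas as pd
from scipy.stats import pearsonr

ema = pd.read_csv("ema_responses.csv")  # one row per completed EMA survey

agg = (ema.groupby(["participant", "condition"])["speech_understanding"]
          .mean()
          .unstack("condition"))  # one row per participant: test, retest

r, p = pearsonr(agg["test"], agg["retest"])
print(f"test-retest r = {r:.2f} (p = {p:.3f})")
```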
  2. Purpose: The goal of this study was to assess the listening behavior and social engagement of cochlear implant (CI) users and normal-hearing (NH) adults in daily life and relate these actions to objective hearing outcomes. Method: Ecological momentary assessments (EMAs) collected using a smartphone app were used to probe patterns of listening behavior in CI users and age-matched NH adults to detect differences in social engagement and listening behavior in daily life. Participants completed very short surveys every 2 hr to provide snapshots of typical, everyday listening and socializing, as well as longer, reflective surveys at the end of the day to assess listening strategies and coping behavior. Speech perception testing, with accompanying ratings of task difficulty, was also performed in a lab setting to uncover possible correlations between objective and subjective listening behavior. Results: Comparisons between speech intelligibility testing and EMA responses showed that poorer-performing CI users spent more time at home and less time conversing with others than higher-performing CI users and their NH peers. Perception of listening difficulty also differed markedly between CI users and NH listeners, with CI users reporting little difficulty despite poor speech perception performance. However, both CI users and NH listeners spent most of their time in listening environments they considered “not difficult.” CI users also reported using several compensatory listening strategies, such as visual cues, whereas NH listeners did not. Conclusion: Overall, the data indicate systematic differences between how individual CI users and NH adults navigate and manipulate listening and social environments in everyday life.
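    The abstract does not name the statistics used to relate EMA responses to speech intelligibility scores; as a hedged illustration only, a rank correlation between lab scores and the EMA-derived proportion of time spent at home might look like the following (file and column names are hypothetical):

```python
# Sketch: relating EMA-derived time use to lab speech perception scores.
# The abstract does not specify a test; Spearman correlation is used purely
# as an illustration. All file and column names are hypothetical.
import pandas as pd
from scipy.stats import spearmanr

ema = pd.read_csv("ema_snapshots.csv")      # one row per 2-hour EMA snapshot
scores = pd.read_csv("speech_scores.csv")   # lab intelligibility per person

# Proportion of snapshots answered "at home" (0/1) for each participant
at_home = ema.groupby("participant")["at_home"].mean()
merged = scores.set_index("participant").join(at_home)

rho, p = spearmanr(merged["intelligibility"], merged["at_home"])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```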
  3. Abstract Background Ecological momentary assessment (EMA) is a methodology involving repeated surveys to collect in-situ self-reports that describe respondents' current or recent experiences. Audiology literature comparing in-situ and retrospective self-reports is scarce. Purpose To compare the sensitivity of in-situ and retrospective self-reports in detecting the outcome difference between hearing aid technologies, and to determine the association between in-situ and retrospective self-reports. Research Design An observational study. Study Sample Thirty-nine older adults with hearing loss. Data Collection and Analysis The study was part of a larger clinical trial that compared the outcomes of a prototype hearing aid (denoted as HA1) and a commercially available device (HA2). In each trial condition, participants wore hearing aids for 4 weeks. Outcomes were measured using EMA and retrospective questionnaires. To ensure that the outcome data could be directly compared, the Glasgow Hearing Aid Benefit Profile was administered as an in-situ self-report (denoted as EMA-GHABP) and as a retrospective questionnaire (retro-GHABP). Linear mixed models were used to determine if the EMA- and retro-GHABP could detect the outcome difference between HA1 and HA2. Correlation analyses were used to examine the association between EMA- and retro-GHABP. Results For the EMA-GHABP, HA2 had significantly higher (better) scores than HA1 in the GHABP subscales of benefit, residual disability, and satisfaction (p = 0.029–0.0015). In contrast, the difference in the retro-GHABP score between HA1 and HA2 was significant only in the satisfaction subscale (p = 0.0004). The correlations between the EMA- and retro-GHABP were significant in all subscales (p = 0.0004 to <0.0001). The strength of the association ranged from weak to moderate (r = 0.28–0.58). Finally, the exit interview indicated that 29 participants (74.4%) preferred HA2 over HA1. Conclusion The study suggests that in-situ self-reports collected using EMA could have a higher sensitivity than retrospective questionnaires. Therefore, EMA is worth considering in clinical trials that aim to compare the outcomes of different hearing aid technologies. The weak to moderate association between in-situ and retrospective self-reports suggests that these two types of measures assess different aspects of hearing aid outcomes. 
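    A minimal sketch of the two analyses named above (a linear mixed model for the device effect, plus a correlation between EMA- and retro-GHABP scores), over a hypothetical participant-by-device table; the paper's exact model terms and subscale handling are not reproduced here:

```python
# Sketch: fixed effect of device (HA1 vs. HA2) on the EMA-GHABP benefit
# subscale with a random intercept per participant, then the EMA/retro
# association. File and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

ghabp = pd.read_csv("ghabp_scores.csv")  # participant, device, ema_benefit, retro_benefit

# Linear mixed model: does the device change the in-situ benefit score?
lmm = smf.mixedlm("ema_benefit ~ device", ghabp, groups=ghabp["participant"]).fit()
print(lmm.summary())

# Association between in-situ and retrospective reports of the same subscale
r, p = pearsonr(ghabp["ema_benefit"], ghabp["retro_benefit"])
print(f"EMA vs. retro benefit: r = {r:.2f} (p = {p:.4f})")
```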
  4. Background Chatbots are being piloted to draft responses to patient questions, but patients’ ability to distinguish between provider and chatbot responses and patients’ trust in chatbots’ functions are not well established. Objective This study aimed to assess the feasibility of using ChatGPT (Chat Generative Pre-trained Transformer) or a similar artificial intelligence–based chatbot for patient-provider communication. Methods A survey study was conducted in January 2023. Ten representative, nonadministrative patient-provider interactions were extracted from the electronic health record. Patients’ questions were entered into ChatGPT with a request for the chatbot to respond using approximately the same word count as the human provider’s response. In the survey, each patient question was followed by a provider- or ChatGPT-generated response. Participants were informed that 5 responses were provider generated and 5 were chatbot generated. Participants were asked—and incentivized financially—to correctly identify the response source. Participants were also asked about their trust in chatbots’ functions in patient-provider communication, using a Likert scale from 1 to 5. Results A US-representative sample of 430 study participants aged 18 and older was recruited on Prolific, a crowdsourcing platform for academic studies. In all, 426 participants filled out the full survey. After removing participants who spent less than 3 minutes on the survey, 392 respondents remained. Overall, 53.3% (209/392) of respondents analyzed were women, and the average age was 47.1 (range 18-91) years. The correct classification of responses ranged from 49% (192/392) to 85.7% (336/392) for different questions. On average, chatbot responses were identified correctly in 65.5% (1284/1960) of the cases, and human provider responses were identified correctly in 65.1% (1276/1960) of the cases. On average, responses toward patients’ trust in chatbots’ functions were weakly positive (mean Likert score 3.4 out of 5), with lower trust as the health-related complexity of the task in the questions increased. Conclusions ChatGPT responses to patient questions were weakly distinguishable from provider responses. Laypeople appear to trust the use of chatbots to answer lower-risk health questions. It is important to continue studying patient-chatbot interaction as chatbots move from administrative to more clinical roles in health care.
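    A short sketch of how source-identification accuracy could be scored from raw judgments; the binomial test against the 50% chance level is an illustrative addition, not an analysis reported in the abstract (file and column names are hypothetical):

```python
# Sketch: scoring source identification and testing it against chance.
# One row per judged response; names are hypothetical.
import pandas as pd
from scipy.stats import binomtest

resp = pd.read_csv("identification_responses.csv")

correct = resp["guessed_source"] == resp["true_source"]
k, n = int(correct.sum()), len(correct)
print(f"accuracy = {k / n:.1%} ({k}/{n})")

# Illustration: is identification better than the 50% chance level?
print(binomtest(k, n, p=0.5, alternative="greater"))
```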
  5. Background To succeed in engineering careers, students must be able to create and apply models to a variety of problems. The types of models include physical, mathematical, computational, graphical, and financial models, which are used in academia, research, and industry. However, many students struggle to define, create, and apply relevant models in their engineering courses. Purpose (Research Questions) The research questions investigated in this study are: (1) What types of models do engineering students identify before and after completing a first-year engineering course? (2) How do students’ responses compare across different courses (a graphical communications course, EGR 120, and a programming course, EGR 115) and sections? Design/Methods The data used for this study were collected in two introductory first-year engineering courses offered during Fall 2019, EGR 115 and EGR 120. Students’ responses to a survey about modeling were qualitatively analyzed. The survey was given at the beginning and the end of the courses. The data analyzed consisted of 560 pre- and post-surveys for EGR 115 and 384 pre- and post-surveys for EGR 120. Results Once the analysis is complete, we hope to find that students can better define and apply models in their engineering courses after completing EGR 115 and/or EGR 120.
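    Because the analysis was still in progress, the following is only a hedged sketch of how qualitatively coded survey responses might be tallied by course and phase; file names, column names, and codes are hypothetical:

```python
# Sketch: tallying model types named in pre- vs. post-course survey responses
# after qualitative coding. All names are hypothetical.
import pandas as pd

coded = pd.read_csv("coded_responses.csv")  # columns: course, phase, model_type

# Counts of each model type mentioned before vs. after each course
counts = (coded.groupby(["course", "phase"])["model_type"]
               .value_counts()
               .unstack(fill_value=0))
print(counts)
```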