Title: Why Ecological Momentary Assessment Surveys Go Incomplete: When It Happens and How It Impacts Data
Abstract Background Ecological momentary assessment (EMA) often requires respondents to complete surveys in the moment to report real-time experiences. Because EMA may seem disruptive or intrusive, respondents may not complete surveys as directed in certain circumstances. Purpose This article aims to determine the effect of environmental characteristics on the likelihood of instances in which respondents do not complete EMA surveys (referred to as survey incompletion), and to estimate the impact of survey incompletion on EMA self-report data. Research Design An observational study. Study Sample Ten adult hearing aid (HA) users. Data Collection and Analysis Experienced, bilateral HA users were recruited and fit with study HAs. The study HAs were equipped with a real-time data logger, an algorithm that logged the data generated by the HAs (e.g., overall sound level, environment classification, and feature status, including microphone mode and amount of gain reduction). The study HAs were also connected via Bluetooth to a smartphone app, which collected the real-time data logging data and presented the participants with EMA surveys about their listening environments and experiences. The participants then wore the HAs and completed surveys in their daily lives for 1 week. Real-time data logging was triggered when participants completed surveys and when participants ignored or snoozed surveys. The logged data were used to estimate the effect of environmental characteristics on the likelihood of survey incompletion, and to predict participants' responses to survey questions in the instances of survey incompletion. Results Across the 10 participants, 715 surveys were completed and survey incompletion occurred 228 times. Mixed effects logistic regression models indicated that survey incompletion was more likely to happen in environments that were less quiet and contained more speech, noise, and machine sounds, and in environments wherein directional microphones and noise reduction algorithms were enabled. The results of survey response prediction further indicated that the participants could have reported more challenging environments and more listening difficulty in the instances of survey incompletion. However, the difference in the distribution of survey responses between the observed responses and the combined observed and predicted responses was small. Conclusion The present study indicates that EMA survey incompletion occurs systematically. Although survey incompletion could bias EMA self-report data, the impact is likely to be small.
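As a rough illustration of the type of analysis described above (not the authors' code), the sketch below fits a mixed effects logistic model predicting survey incompletion from logged environment features, with a random intercept per participant, using statsmodels' Bayesian mixed GLM as a stand-in for the mixed effects logistic regression; the file name and column names are hypothetical.

```python
# Minimal sketch, assuming one row per survey prompt with hypothetical columns.
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("ema_datalog.csv")  # hypothetical file: one row per EMA prompt

# incomplete = 1 if the survey was ignored/snoozed, 0 if completed
model = BinomialBayesMixedGLM.from_formula(
    "incomplete ~ sound_level + speech + noise + machine_sounds"
    " + directional_mic + noise_reduction",
    vc_formulas={"participant": "0 + C(participant)"},  # random intercept per participant
    data=df,
)
result = model.fit_vb()  # variational Bayes fit
print(result.summary())
```

The fixed-effect terms correspond to the environmental predictors reported in the abstract (overall sound level, sound classes, and feature status).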
Award ID(s): 1838830
NSF-PAR ID: 10309716
Journal Name: Journal of the American Academy of Audiology
Volume: 32
Issue: 01
ISSN: 1050-0545
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1.
    Abstract Background Ecological momentary assessment (EMA) is a methodology involving repeated surveys to collect in situ data that describe respondents' current or recent experiences and related contexts in their natural environments. Audiology literature investigating the test-retest reliability of EMA is scarce. Purpose This article examines the test-retest reliability of EMA in measuring the characteristics of listening contexts and listening experiences. Research Design An observational study. Study Sample Fifty-one older adults with hearing loss. Data Collection and Analysis The study was part of a larger study that examined the effect of hearing aid technologies. The larger study had four trial conditions and outcome was measured using a smartphone-based EMA system. After completing the four trial conditions, participants repeated one of the conditions to examine the EMA test-retest reliability. The EMA surveys contained questions that assessed listening context characteristics including talker familiarity, talker location, and noise location, as well as listening experiences including speech understanding, listening effort, loudness satisfaction, and hearing aid satisfaction. The data from multiple EMA surveys collected by each participant were aggregated in each of the test and retest conditions. Test-retest correlation on the aggregated data was then calculated for each EMA survey question to determine the reliability of EMA. Results At the group level, listening context characteristics and listening experience did not change between the test and retest conditions. The test-retest correlation varied across the EMA questions, with the highest being the questions that assessed talker location (median r = 1.0), reverberation (r = 0.89), and speech understanding (r = 0.85), and the lowest being the items that quantified noise location (median r = 0.63), talker familiarity (r = 0.46), listening effort (r = 0.61), loudness satisfaction (r = 0.60), and hearing aid satisfaction (r = 0.61). Conclusion Several EMA questions yielded appropriate test-retest reliability results. The lower test-retest correlations for some EMA survey questions were likely due to fewer surveys completed by participants and poorly designed questions. Therefore, the present study stresses the importance of using validated questions in EMA. With sufficient numbers of surveys completed by respondents and with appropriately designed survey questions, EMA could have reasonable test-retest reliability in audiology research. 
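    The aggregation-and-correlation step described above can be illustrated with a short, hypothetical sketch; the file and column names are assumptions, and Pearson correlation is used here for illustration (the abstract does not specify the coefficient).

```python
# Minimal sketch: average each participant's EMA responses to a question within
# the test and retest conditions, then correlate the participant-level means.
import pandas as pd
from scipy.stats import pearsonr

ema = pd.read_csv("ema_responses.csv")  # hypothetical: participant, condition ("test"/"retest"), question, response

means = (
    ema.groupby(["participant", "condition", "question"])["response"]
       .mean()
       .unstack("condition")  # index: (participant, question); columns: "test", "retest"
)

# Test-retest correlation across participants, computed per survey question
for question, block in means.groupby(level="question"):
    r, p = pearsonr(block["test"], block["retest"])
    print(f"{question}: test-retest r = {r:.2f} (p = {p:.3f})")
```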
  2. Purpose: The goal of this study was to assess the listening behavior and social engagement of cochlear implant (CI) users and normal-hearing (NH) adults in daily life and relate these actions to objective hearing outcomes. Method: Ecological momentary assessments (EMAs) collected using a smartphone app were used to probe patterns of listening behavior in CI users and age-matched NH adults to detect differences in social engagement and listening behavior in daily life. Participants completed very short surveys every 2 hr to provide snapshots of typical, everyday listening and socializing, as well as longer, reflective surveys at the end of the day to assess listening strategies and coping behavior. Speech perception testing, with accompanying ratings of task difficulty, was also performed in a lab setting to uncover possible correlations between objective and subjective listening behavior. Results: Comparisons between speech intelligibility testing and EMA responses showed poorer performing CI users spending more time at home and less time conversing with others than higher performing CI users and their NH peers. Perception of listening difficulty was also very different for CI users and NH listeners, with CI users reporting little difficulty despite poor speech perception performance. However, both CI users and NH listeners spent most of their time in listening environments they considered “not difficult.” CI users also reported using several compensatory listening strategies, such as visual cues, whereas NH listeners did not. Conclusion: Overall, the data indicate systematic differences between how individual CI users and NH adults navigate and manipulate listening and social environments in everyday life. 
  3. The COVID-19 pandemic has dramatically altered family life in the United States. Over the long duration of the pandemic, parents had to adapt to shifting work conditions, virtual schooling, the closure of daycare facilities, and the stress of not only managing households without domestic and care supports but also worrying that family members may contract the novel coronavirus. Reports early in the pandemic suggest that these burdens have fallen disproportionately on mothers, creating concerns about the long-term implications of the pandemic for gender inequality and mothers’ well-being. Nevertheless, less is known about how parents’ engagement in domestic labor and paid work has changed throughout the pandemic, what factors may be driving these changes, and what the long-term consequences of the pandemic may be for the gendered division of labor and gender inequality more generally.

    The Study on U.S. Parents’ Divisions of Labor During COVID-19 (SPDLC) collects longitudinal survey data from partnered U.S. parents that can be used to assess changes in parents’ divisions of domestic labor, divisions of paid labor, and well-being throughout and after the COVID-19 pandemic. The goal of SPDLC is to understand both the short- and long-term impacts of the pandemic for the gendered division of labor, work-family issues, and broader patterns of gender inequality.

    Survey data for this study are collected using Prolific (www.prolific.co), an opt-in online platform designed to facilitate scientific research. The sample comprises U.S. adults who were residing with a romantic partner and at least one biological child (at the time of entry into the study). In each survey, parents answer questions about both themselves and their partners. Wave 1 of SPDLC was conducted in April 2020, and parents who participated in Wave 1 were asked about their division of labor both prior to (i.e., early March 2020) and one month after the pandemic began. Wave 2 of SPDLC was collected in November 2020. Parents who participated in Wave 1 were invited to participate again in Wave 2, and a new cohort of parents was also recruited to participate in the Wave 2 survey. Wave 3 of SPDLC was collected in October 2021. Parents who participated in either of the first two waves were invited to participate again in Wave 3, and another new cohort of parents was also recruited to participate in the Wave 3 survey. This research design (follow-up survey of panelists and new cross-section of parents at each wave) will continue through 2024, culminating in six waves of data spanning the period from March 2020 through October 2024. An estimated total of approximately 6,500 parents will be surveyed at least once throughout the duration of the study.

    SPDLC data will be released to the public two years after data is collected; Waves 1 and 2 are currently publicly available. Wave 3 will be publicly available in October 2023, with subsequent waves becoming available yearly. Data will be available to download in both SPSS (.sav) and Stata (.dta) formats, and the following data files will be available: (1) a data file for each individual wave, which contains responses from all participants in that wave of data collection, (2) a longitudinal panel data file, which contains longitudinal follow-up data from all available waves, and (3) a repeated cross-section data file, which contains the repeated cross-section data (from new respondents at each wave) from all available waves. Codebooks for each survey wave and a detailed user guide describing the data are also available. Response Rates: Of the 1,157 parents who participated in Wave 1, 828 (72%) also participated in the Wave 2 study. Presence of Common Scales: The following established scales are included in the survey:
    • Self-Efficacy, adapted from Pearlin's mastery scale (Pearlin et al., 1981) and the Rosenberg self-esteem scale (Rosenberg, 2015) and taken from the American Changing Lives Survey
    • Communication with Partner, taken from the Marriage and Relationship Survey (Lichter & Carmalt, 2009)
    • Gender Attitudes, taken from the National Survey of Families and Households (Sweet & Bumpass, 1996)
    • Depressive Symptoms (CES-D-10)
    • Stress, measured using Cohen's Perceived Stress Scale (Cohen, Kamarck, & Mermelstein, 1983)
    Full details about these scales and all other items included in the survey can be found in the user guide and codebook
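    As a hypothetical example of working with the released data files described above, the Stata (.dta) and SPSS (.sav) files can be read with pandas; the file names below are placeholders, and the actual names are documented in the codebooks and user guide.

```python
# Minimal sketch, assuming placeholder file names for the three released file types.
import pandas as pd

wave2 = pd.read_stata("spdlc_wave2.dta")           # single-wave file (Stata format)
panel = pd.read_stata("spdlc_longitudinal.dta")    # longitudinal panel file
cross = pd.read_spss("spdlc_repeated_cross.sav")   # repeated cross-section file (SPSS format; requires pyreadstat)

print(wave2.shape, panel.shape, cross.shape)
```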
    The second wave of the SPDLC was fielded in November 2020 in two stages. In the first stage, all parents who participated in W1 of the SPDLC and who continued to reside in the United States were re-contacted and asked to participate in a follow-up survey. The W2 survey was posted on Prolific, and messages were sent via Prolific’s messaging system to all previous participants. Multiple follow-up messages were sent in an attempt to increase response rates to the follow-up survey. Of the 1,157 respondents who completed the W1 survey, 873 at least started the W2 survey. Data quality checks were employed in line with best practices for online surveys (e.g., removing respondents who did not complete most of the survey or who did not pass the attention filters). After data quality checks, 5.2% of respondents were removed from the sample, resulting in a final sample size of 828 parents (a response rate of 72%).

    In the second stage, a new sample of parents was recruited. New parents had to meet the same sampling criteria as in W1 (be at least 18 years old, reside in the United States, reside with a romantic partner, and be a parent living with at least one biological child). Also similar to the W1 procedures, we oversampled men, Black individuals, individuals who did not complete college, and individuals who identified as politically conservative to increase sample diversity. A total of 1,207 parents participated in the W2 survey. Data quality checks led to the removal of 5.7% of the respondents, resulting in a final sample size of new respondents at Wave 2 of 1,138 parents.

    In both stages, participants were informed that the survey would take approximately 20 minutes to complete. All panelists were provided monetary compensation in line with Prolific’s compensation guidelines, which require that all participants earn above minimum wage for their time participating in studies.
    To be included in SPDLC, respondents had to meet the following sampling criteria at the time they enter the study: (a) be at least 18 years old, (b) reside in the United States, (c) reside with a romantic partner (i.e., be married or cohabiting), and (d) be a parent living with at least one biological child. Follow-up respondents must be at least 18 years old and reside in the United States, but may experience changes in relationship and resident parent statuses. Smallest Geographic Unit: U.S. State

    This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. In accordance with this license, all users of these data must give appropriate credit to the authors in any papers, presentations, books, or other works that use the data. A suggested citation to provide attribution for these data is included below:            

    Carlson, Daniel L. and Richard J. Petts. 2022. Study on U.S. Parents’ Divisions of Labor During COVID-19 User Guide: Waves 1-2.  

    To help provide estimates that are more representative of U.S. partnered parents, the SPDLC includes sampling weights. Weights can be included in statistical analyses to make estimates from the SPDLC sample representative of U.S. parents who reside with a romantic partner (married or cohabiting) and a child aged 18 or younger based on age, race/ethnicity, and gender. National estimates for the age, racial/ethnic, and gender profile of U.S. partnered parents were obtained using data from the 2020 Current Population Survey (CPS). Weights were calculated using an iterative raking method, such that the full sample in each data file matches the nationally representative CPS data in regard to the gender, age, and racial/ethnic distributions within the data. This variable is labeled CPSweightW2 in the Wave 2 dataset, and CPSweightLW2 in the longitudinal dataset (which includes Waves 1 and 2). There is not a weight variable included in the W1-W2 repeated cross-section data file.
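    A minimal, hypothetical sketch of using the CPSweightW2 weight in a descriptive analysis is shown below; the file name and outcome variable name are placeholders.

```python
# Minimal sketch: weighted vs. unweighted mean of a hypothetical outcome variable.
import numpy as np
import pandas as pd

wave2 = pd.read_stata("spdlc_wave2.dta")   # placeholder file name

outcome = wave2["housework_hours"]         # placeholder outcome variable
weighted = np.average(outcome, weights=wave2["CPSweightW2"])  # documented Wave 2 weight
print(f"weighted mean: {weighted:.2f}, unweighted mean: {outcome.mean():.2f}")
```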
     
  4. Abstract Background Ecological momentary assessment (EMA) is a methodology involving repeated surveys to collect in-situ self-reports that describe respondents' current or recent experiences. Audiology literature comparing in-situ and retrospective self-reports is scarce. Purpose To compare the sensitivity of in-situ and retrospective self-reports in detecting the outcome difference between hearing aid technologies, and to determine the association between in-situ and retrospective self-reports. Research Design An observational study. Study Sample Thirty-nine older adults with hearing loss. Data Collection and Analysis The study was part of a larger clinical trial that compared the outcomes of a prototype hearing aid (denoted as HA1) and a commercially available device (HA2). In each trial condition, participants wore hearing aids for 4 weeks. Outcomes were measured using EMA and retrospective questionnaires. To ensure that the outcome data could be directly compared, the Glasgow Hearing Aid Benefit Profile was administered as an in-situ self-report (denoted as EMA-GHABP) and as a retrospective questionnaire (retro-GHABP). Linear mixed models were used to determine if the EMA- and retro-GHABP could detect the outcome difference between HA1 and HA2. Correlation analyses were used to examine the association between EMA- and retro-GHABP. Results For the EMA-GHABP, HA2 had significantly higher (better) scores than HA1 in the GHABP subscales of benefit, residual disability, and satisfaction (p = 0.029–0.0015). In contrast, the difference in the retro-GHABP score between HA1 and HA2 was significant only in the satisfaction subscale (p = 0.0004). The correlations between the EMA- and retro-GHABP were significant in all subscales (p = 0.0004 to <0.0001). The strength of the association ranged from weak to moderate (r = 0.28–0.58). Finally, the exit interview indicated that 29 participants (74.4%) preferred HA2 over HA1. Conclusion The study suggests that in-situ self-reports collected using EMA could have a higher sensitivity than retrospective questionnaires. Therefore, EMA is worth considering in clinical trials that aim to compare the outcomes of different hearing aid technologies. The weak to moderate association between in-situ and retrospective self-reports suggests that these two types of measures assess different aspects of hearing aid outcomes. 
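    A rough sketch (not the study's code) of the two analyses described above follows, with hypothetical file and column names: a linear mixed model testing the HA1 vs. HA2 difference on an EMA-GHABP subscale with participant random intercepts, and the correlation between the EMA- and retro-GHABP versions of the same subscale.

```python
# Minimal sketch, assuming one row per participant-by-device observation.
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

scores = pd.read_csv("ghabp_scores.csv")  # hypothetical: participant, device ("HA1"/"HA2"), ema_benefit, retro_benefit

# Linear mixed model: fixed effect of device, random intercept per participant
mm = smf.mixedlm("ema_benefit ~ C(device)", data=scores,
                 groups=scores["participant"]).fit()
print(mm.summary())

# Association between the in-situ and retrospective reports of the same subscale
r, p = pearsonr(scores["ema_benefit"], scores["retro_benefit"])
print(f"EMA vs. retro benefit: r = {r:.2f}, p = {p:.4f}")
```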
  5. Need/Motivation (e.g., goals, gaps in knowledge) The ESTEEM project implemented a STEM capacity-building effort through students' early access to a sustainable and innovative STEM stepping stone called Micro-Internships (MI). The goal is to reap key benefits of full-length internships and undergraduate research experiences in an abbreviated format, including access, success, degree completion, transfer, and recruiting and retaining more Latinx and underrepresented students into the STEM workforce. The MIs are designed to provide students at a community college and HSI with authentic STEM research and applied learning experiences (ALE), support for an appropriate STEM pathway/career, and the preparation and confidence to succeed in STEM, engage in summer-long REUs, and achieve improved outcomes. The MI projects are accessible early to more students and build momentum to overcome critical obstacles to success. The MIs are shorter, flexibly scheduled throughout the year, and easily accessible, and participation in multiple MIs is encouraged. ESTEEM also establishes a sustainable and collaborative model, working with partners from BSCS Science Education, for MI mentoring, training, compliance, and capacity building, with shared values and practices to maximize the improvement of student outcomes.

    New Knowledge (e.g., hypothesis, research questions) Research indicates that REU/internship experiences can be particularly powerful for students from Latinx and underrepresented groups in STEM. However, those experiences are difficult to access for many HSI community college students (85% of our students hold off-campus jobs), and lack of confidence is a barrier for a majority of our students. The gap between those who can and those who cannot access these experiences is the "internship access gap." This project is at a central California Community College (CCC) and HSI, the only affordable post-secondary option in a region serving a population historically underrepresented in STEM, in which 75% of students are Hispanic and 87% have not completed college. MI is designed to reduce inequalities inherent in the internship paradigm by providing access to professional and research skills for these underserved students. The MI has been designed to reduce barriers by offering shorter duration (25 contact hours); flexible timing (one week to once a week over many weeks); open access/large groups; and proximal location (on campus). MI mentors participate in week-long summer workshops and an ongoing monthly community of practice with the goal of co-constructing a shared vision, engaging in conversations about pedagogy and learning, and sustaining the MI program going forward.

    Approach (e.g., objectives/specific aims, research methodologies, and analysis) Research Question and Methodology: We want to know: How does participation in a micro-internship affect students' interest in and confidence to pursue STEM? We used a mixed-methods design triangulating quantitative Likert-style survey data with interpretive coding of open responses to reveal themes in students' motivations, attitudes toward STEM, and confidence. Participants: The study sampled students enrolled either part-time or full-time at the community college. Although each MI was classified within STEM, they were open to any interested student in any major. Demographically, participants self-identified as 70% Hispanic/Latinx, 13% Mixed-Race, and 42% female.
    Instrument: Student surveys were developed from two previously validated instruments that examine the impact of the MI intervention on student interest in STEM careers and in pursuing internships/REUs. The pre- and post-surveys (administered periodically to assess longitudinal outcomes) also included relevant open-response prompts. The surveys collected students' demographics; interest, confidence, and motivation in pursuing a career in STEM; perceived obstacles; and past experiences with internships and MIs. At the time of submission, 171 students had responded to the pre-survey.

    Outcomes (e.g., preliminary findings, accomplishments to date) Because we have just finished year 1, we do not yet have longitudinal data to reveal whether student confidence is maintained over time and whether students are more likely to (i) enroll in more internships, (ii) transfer to a four-year university, or (iii) shorten the time to degree attainment. For short-term outcomes, students significantly increased their confidence to continue pursuing opportunities to develop within the STEM pipeline, including full-length internships, completing STEM degrees, and applying for jobs in STEM. For example, using a two-tailed t-test, we compared means before and after the MI experience (a minimal sketch of this kind of comparison is shown after this entry); 15 of the 16 questions that showed improved scores were related to students' confidence to pursue STEM or perceived enjoyment of a STEM career. Findings from the free-response questions showed that the majority of students reported enrolling in the MI to gain knowledge and experience. After the MI, 66% of students reported having gained valuable knowledge and experience, and 35% of students spoke about gaining confidence and/or momentum to pursue STEM as a career.

    Broader Impacts (e.g., the participation of underrepresented minorities in STEM; development of a diverse STEM workforce, enhanced infrastructure for research and education) The ESTEEM project has the potential for a transformational impact on access and success in STEM undergraduate education for underrepresented and Latinx community college students, as well as for STEM capacity building at Hartnell College, a CCC and HSI, for the students, faculty, professionals, and processes that foster research in STEM and education. Through sharing and transfer of the ESTEEM model to similar institutions, the project has the potential to change the way students are served at an early and critical stage of their higher education experience at CCCs, where one in every five community college students in the nation attends a CCC, over 67% of CCC students identify with ethnic backgrounds that are not White, and 40 to 50% of University of California and California State University graduates in STEM started at a CCC, thus making CCCs a key leverage point for recruiting and retaining a more diverse STEM workforce.
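    A minimal, hypothetical sketch of the pre/post comparison mentioned in the Outcomes paragraph above; the file and variable names are placeholders.

```python
# Minimal sketch: two-tailed paired t-test on one pre/post confidence item.
import pandas as pd
from scipy.stats import ttest_rel

surveys = pd.read_csv("mi_pre_post.csv")  # placeholder: one row per student with pre_/post_ columns

t, p = ttest_rel(surveys["post_confidence"], surveys["pre_confidence"])
print(f"paired t-test: t = {t:.2f}, p = {p:.3f} (two-tailed)")
```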