

Title: Participation incentives in a survey of international non-profit professionals
Elite surveys are increasingly common in political science, but how best to motivate participation in them remains poorly understood. This study compares the effect of three treatments designed to increase participation in an online survey of international non-profit professionals: a monetary reward, an altruistic appeal emphasizing the study’s benefits, and a promise to give the respondent access to the study’s results. Only the monetary incentive increased the survey response rate. It did not decrease response quality as measured in terms of straight-lining or skipped questions, although it may have produced a pool of respondents more likely to speed through the survey. The findings suggest that monetary incentives reduce total survey error even in the context of an elite survey, perhaps especially with elite populations frequently contacted by researchers. However, such incentives may not be without trade-offs in terms of how carefully respondents engage with the survey.
Award ID(s): 1759158, 1758755
NSF-PAR ID: 10400745
Author(s) / Creator(s): ; ;
Date Published:
Journal Name: Research & Politics
Volume: 9
Issue: 3
ISSN: 2053-1680
Page Range / eLocation ID: 205316802211257
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Despite the growing popularity of digital payment transactions in the United States, most survey participation incentives are still paid in cash or by check and distributed to respondents or potential sample members via direct mail. Though survey researchers have explored alternative incentives, such as e-gift cards, for online samples, there has been no study to date of electronic cash incentives paid through mobile pay applications. In this article, we briefly review the literature on incentives used in online surveys and then examine survey incentive payment preferences among respondents using a small, web-based survey of younger adults. Our results suggest a greater preference for cash incentives paid through mobile applications than through direct mail, further highlighting the need for more research on the efficacy of electronically delivered monetary incentives.
  2. The COVID-19 pandemic has dramatically altered family life in the United States. Over the long duration of the pandemic, parents had to adapt to shifting work conditions, virtual schooling, the closure of daycare facilities, and the stress of not only managing households without domestic and care supports but also worrying that family members may contract the novel coronavirus. Reports early in the pandemic suggest that these burdens have fallen disproportionately on mothers, creating concerns about the long-term implications of the pandemic for gender inequality and mothers’ well-being. Nevertheless, less is known about how parents’ engagement in domestic labor and paid work has changed throughout the pandemic, what factors may be driving these changes, and what the long-term consequences of the pandemic may be for the gendered division of labor and gender inequality more generally.

    The Study on U.S. Parents’ Divisions of Labor During COVID-19 (SPDLC) collects longitudinal survey data from partnered U.S. parents that can be used to assess changes in parents’ divisions of domestic labor, divisions of paid labor, and well-being throughout and after the COVID-19 pandemic. The goal of SPDLC is to understand both the short- and long-term impacts of the pandemic for the gendered division of labor, work-family issues, and broader patterns of gender inequality.

    Survey data for this study is collected using Prolific (www.prolific.co), an opt-in online platform designed to facilitate scientific research. The sample comprises U.S. adults who were residing with a romantic partner and at least one biological child (at the time of entry into the study). In each survey, parents answer questions about both themselves and their partners. Wave 1 of SPDLC was conducted in April 2020, and parents who participated in Wave 1 were asked about their division of labor both prior to (i.e., early March 2020) and one month after the pandemic began. Wave 2 of SPDLC was collected in November 2020. Parents who participated in Wave 1 were invited to participate again in Wave 2, and a new cohort of parents was also recruited to participate in the Wave 2 survey. Wave 3 of SPDLC was collected in October 2021. Parents who participated in either of the first two waves were invited to participate again in Wave 3, and another new cohort of parents was also recruited to participate in the Wave 3 survey. This research design (follow-up survey of panelists and new cross-section of parents at each wave) will continue through 2024, culminating in six waves of data spanning the period from March 2020 through October 2024. An estimated total of approximately 6,500 parents will be surveyed at least once throughout the duration of the study.

    SPDLC data will be released to the public two years after collection; Waves 1 and 2 are currently publicly available. Wave 3 will be publicly available in October 2023, with subsequent waves becoming available yearly. Data will be available to download in both SPSS (.sav) and Stata (.dta) formats, and the following data files will be available: (1) a data file for each individual wave, which contains responses from all participants in that wave of data collection; (2) a longitudinal panel data file, which contains longitudinal follow-up data from all available waves; and (3) a repeated cross-section data file, which contains data from the new respondents recruited at each wave. Codebooks for each survey wave and a detailed user guide describing the data are also available.
    Response Rates: Of the 1,157 parents who participated in Wave 1, 828 (72%) also participated in the Wave 2 study.
    Presence of Common Scales: The following established scales are included in the survey:
    • Self-Efficacy, adapted from Pearlin's mastery scale (Pearlin et al., 1981) and the Rosenberg self-esteem scale (Rosenberg, 2015) and taken from the American Changing Lives Survey
    • Communication with Partner, taken from the Marriage and Relationship Survey (Lichter & Carmalt, 2009)
    • Gender Attitudes, taken from the National Survey of Families and Households (Sweet & Bumpass, 1996)
    • Depressive Symptoms (CES-D-10)
    • Stress, measured using Cohen's Perceived Stress Scale (Cohen, Kamarck, & Mermelstein, 1983)
    Full details about these scales and all other items included in the survey can be found in the user guide and codebook.
    The second wave of the SPDLC was fielded in November 2020 in two stages. In the first stage, all parents who participated in W1 of the SPDLC and who continued to reside in the United States were re-contacted and asked to participate in a follow-up survey. The W2 survey was posted on Prolific, and messages were sent via Prolific’s messaging system to all previous participants. Multiple follow-up messages were sent in an attempt to increase response rates to the follow-up survey. Of the 1,157 respondents who completed the W1 survey, 873 at least started the W2 survey. Data quality checks were employed in line with best practices for online surveys (e.g., removing respondents who did not complete most of the survey or who did not pass the attention filters). After data quality checks, 5.2% of respondents were removed from the sample, resulting in a final sample size of 828 parents (a response rate of 72%).

    In the second stage, a new sample of parents was recruited. New parents had to meet the same sampling criteria as in W1 (be at least 18 years old, reside in the United States, reside with a romantic partner, and be a parent living with at least one biological child). Also similar to the W1 procedures, we oversampled men, Black individuals, individuals who did not complete college, and individuals who identified as politically conservative to increase sample diversity. A total of 1,207 parents participated in the W2 survey. Data quality checks led to the removal of 5.7% of the respondents, resulting in a final sample size of new respondents at Wave 2 of 1,138 parents.

    In both stages, participants were informed that the survey would take approximately 20 minutes to complete. All panelists were provided monetary compensation in line with Prolific’s compensation guidelines, which require that all participants earn above minimum wage for their time participating in studies.
    To be included in SPDLC, respondents had to meet the following sampling criteria at the time they entered the study: (a) be at least 18 years old, (b) reside in the United States, (c) reside with a romantic partner (i.e., be married or cohabiting), and (d) be a parent living with at least one biological child. Follow-up respondents must be at least 18 years old and reside in the United States but may experience changes in relationship and resident-parent status.
    Smallest Geographic Unit: U.S. State

    This work is licensed under the Creative Commons Attribution 4.0 International License. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/. In accordance with this license, all users of these data must give appropriate credit to the authors in any papers, presentations, books, or other works that use the data. A suggested citation to provide attribution for these data is included below:            

    Carlson, Daniel L. and Richard J. Petts. 2022. Study on U.S. Parents’ Divisions of Labor During COVID-19 User Guide: Waves 1-2.  

    To help provide estimates that are more representative of U.S. partnered parents, the SPDLC includes sampling weights. Weights can be included in statistical analyses to make estimates from the SPDLC sample representative of U.S. parents who reside with a romantic partner (married or cohabiting) and a child aged 18 or younger based on age, race/ethnicity, and gender. National estimates for the age, racial/ethnic, and gender profile of U.S. partnered parents were obtained using data from the 2020 Current Population Survey (CPS). Weights were calculated using an iterative raking method, such that the full sample in each data file matches the nationally representative CPS data in regard to the gender, age, and racial/ethnic distributions within the data. This variable is labeled CPSweightW2 in the Wave 2 dataset, and CPSweightLW2 in the longitudinal dataset (which includes Waves 1 and 2). There is not a weight variable included in the W1-W2 repeated cross-section data file.
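    The iterative raking method described above can be sketched in a few lines. The respondents, dimensions, and target margins below are invented for illustration and are not SPDLC or CPS values; the idea is only to show how weights are repeatedly rescaled, one dimension at a time, until the weighted sample margins match the population targets.

    ```python
    # Minimal sketch of iterative raking (iterative proportional fitting).
    # All respondents, categories, and target margins here are hypothetical.

    def rake(weights, groups, targets, iterations=50):
        """Adjust unit weights so weighted margins match target proportions.

        weights: {unit_id: starting weight}
        groups:  {dimension: {unit_id: category}}
        targets: {dimension: {category: target proportion}}
        """
        for _ in range(iterations):
            for dim, target in targets.items():
                # Current weighted total per category on this dimension.
                totals = {}
                for unit, w in weights.items():
                    cat = groups[dim][unit]
                    totals[cat] = totals.get(cat, 0.0) + w
                grand = sum(totals.values())
                # Rescale each unit's weight toward the target margin.
                for unit in weights:
                    cat = groups[dim][unit]
                    weights[unit] *= target[cat] * grand / totals[cat]
        return weights

    # Four hypothetical respondents cross-classified by gender and age.
    weights = {"r1": 1.0, "r2": 1.0, "r3": 1.0, "r4": 1.0}
    groups = {
        "gender": {"r1": "woman", "r2": "woman", "r3": "man", "r4": "man"},
        "age":    {"r1": "18-34", "r2": "35+", "r3": "18-34", "r4": "35+"},
    }
    targets = {
        "gender": {"woman": 0.6, "man": 0.4},
        "age":    {"18-34": 0.5, "35+": 0.5},
    }
    weights = rake(weights, groups, targets)
    women_share = (weights["r1"] + weights["r2"]) / sum(weights.values())
    print(round(women_share, 3))  # matches the 0.6 target margin
    ```

    In production, weighting packages (e.g., the R `survey` package's `rake` function) add convergence checks and weight trimming, but the core loop is the same.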
     
  3. As renewable electricity generation continues to increase in the United States (US), considerable effort goes into matching heterogeneous supply to demand at a subhour time-step. As a result, some electric providers offer incentive-based programs for residential consumers that aim to reduce electric demand during high-demand periods. There is little research into determinants of consumer response to incentive-based programs beyond typical sociodemographic characteristics. To add to this body of literature, this paper presents the findings of a dichotomous choice contingent valuation (CV) survey targeting US ratepayers’ participation in a direct-load-control scheme utilizing a smart thermostat designed to reallocate consumer electricity demand on summer days when grid stress is high. Our results show approximately 50% of respondents are willing to participate at a median willingness-to-accept (WTA) of USD 9.50 (95% CI: 3.74, 15.25) per month for a single summer (June through August), or slightly less than USD 30 per annum. Participation is significantly affected by a respondent’s attitudes and preferences surrounding various environmental and institutional perspectives, but not by sociodemographic characteristics. These findings suggest utilities designing direct-load-control programs may improve participation by designing incentives specific to customers’ attitudes and preferences.
  4. Abstract

    Adaptive survey designs are increasingly used by survey practitioners to counteract ongoing declines in household survey response rates and manage rising fieldwork costs. This paper reports findings from an evaluation of an early-bird incentive (EBI) experiment targeting high-effort respondents who participated in the 2019 wave of the US Panel Study of Income Dynamics. We identified a subgroup of high-effort respondents at risk of nonresponse based on their prior-wave fieldwork effort and randomized them to a treatment offering an extra time-delimited monetary incentive for completing their interview within the first month of data collection (treatment group; N = 800) or the standard study incentive (control group; N = 400). In recent waves, the costs of the protracted fieldwork needed to complete interviews with high-effort cases (repeated interviewer contact attempts plus an increased incentive near the close of data collection) have been extremely high. By incentivizing early participation and reducing the number of interviewer contact attempts and fieldwork days needed to complete the interview, our goal was to manage both nonresponse and survey costs. We found that the EBI treatment increased response rates and reduced fieldwork effort and costs compared to the control group. We review several key findings and limitations, discuss their implications, and identify next steps for future research.

     
  5. Response time (RT) – the time elapsing from the beginning of question reading for a given question until the start of the next question – is a potentially important indicator of data quality that can be reliably measured for all questions in a computer-administered survey using a latent timer (i.e., triggered automatically by moving on to the next question). In interviewer-administered surveys, RTs index data quality by capturing the entire length of time spent on a question–answer sequence, including interviewer question-asking behaviors and respondent question-answering behaviors. Consequently, longer RTs may indicate longer processing or interaction on the part of the interviewer, respondent, or both. RTs are an indirect measure of data quality; they do not directly measure reliability or validity, and we do not directly observe what factors lengthen the administration time. In addition, either too long or too short RTs could signal a problem (Ehlen, Schober, and Conrad 2007). However, studies that link components of RTs (interviewers’ question reading and response latencies) to interviewer and respondent behaviors that index data quality strengthen the claim that RTs indicate data quality (Bergmann and Bristle 2019; Draisma and Dijkstra 2004; Olson, Smyth, and Kirchner 2019). In general, researchers tend to consider longer RTs as signaling processing problems for the interviewer, respondent, or both (Couper and Kreuter 2013; Olson and Smyth 2015; Yan and Olson 2013; Yan and Tourangeau 2008). Previous work demonstrates that RTs are associated with various characteristics of interviewers (where applicable), questions, and respondents in web, telephone, and face-to-face interviews (e.g., Couper and Kreuter 2013; Olson and Smyth 2015; Yan and Tourangeau 2008). We replicate and extend this research by examining how RTs are associated with various question characteristics and several established tools for evaluating questions. 
We also examine whether increased interviewer experience in the study shortens RTs for questions with characteristics that impact the complexity of the interviewer’s task (i.e., interviewer instructions and parenthetical phrases). We examine these relationships in the context of a sample of racially diverse respondents who answered questions about participation in medical research and their health. 
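The latent-timer definition above (RT runs from the start of one question to the start of the next) can be computed directly from screen-load timestamps. The question IDs and times below are invented for illustration; a real computer-administered survey would log such timestamps automatically.

```python
from datetime import datetime

# Hypothetical timestamp log: each entry is (question id, ISO time the
# question screen loaded); the final entry marks the end of the interview.
timestamps = [
    ("Q1", "2024-01-01T10:00:00"),
    ("Q2", "2024-01-01T10:00:12"),
    ("Q3", "2024-01-01T10:00:45"),
    ("END", "2024-01-01T10:01:05"),
]

def response_times(log):
    """RT for each question = seconds from its start to the start of the next item."""
    times = [datetime.fromisoformat(t) for _, t in log]
    return {q: (t2 - t1).total_seconds()
            for (q, _), t1, t2 in zip(log[:-1], times[:-1], times[1:])}

print(response_times(timestamps))  # → {'Q1': 12.0, 'Q2': 33.0, 'Q3': 20.0}
```

Because the timer is triggered by moving on to the next question, each RT bundles question reading, respondent processing, and any interviewer-respondent interaction, which is exactly why the passage treats RT as an indirect quality indicator.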