A prior study found that mailing prepaid incentives with $5 cash visible from outside the envelope increased the response rate to a mail survey by 4 percentage points compared to cash that was not externally visible. This “visible cash effect” suggests opportunities to improve survey response at little or no cost, but many unknowns remain. Among them: Does the visible cash effect generalize to different survey modes, respondent burdens, and cash amounts? Does it differ between fresh samples and reinterview samples? Does it affect data quality or survey costs? This article examines these questions using two linked studies where incentive visibility was randomized in a large probability sample for the American National Election Studies. The first study used $10 incentives with invitations to a long web questionnaire (median 71 minutes, n = 17,849). Visible cash increased response rates in a fresh sample for both screener and extended interview response (by 6.7 and 4.8 percentage points, respectively). Visible cash did not increase the response rate in a reinterview sample where the baseline reinterview response rate was very high (72 percent). The second study used $5 incentives with invitations to a mail-back paper questionnaire (n = 8,000). Visible cash increased the response rate in a sample of prior nonrespondents by 4.0 percentage points (from 31.5 to 35.5), but it did not increase the response rate in a reinterview sample where the baseline reinterview rate was very high (84 percent). In the two studies, several aspects of data quality were investigated, including speeding, non-differentiation, item nonresponse, nonserious responses, noncredible responses, sample composition, and predictive validity; no adverse effects of visible cash were detected, and sample composition improved marginally. Effects on survey costs were either negligible or resulted in net savings. Accumulated evidence now shows that visible cash can increase incentives’ effectiveness in several circumstances.
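The response-rate gains reported above are differences in proportions between randomized arms. As a rough, purely illustrative check of how such an effect might be assessed, the Python sketch below runs a two-proportion z-test on Study 2's 31.5 versus 35.5 percent result. The equal arm sizes of 2,000 are an assumption made only for illustration; the per-arm allocation is not reported in this abstract.

```python
# Illustrative two-proportion z-test for a visible cash effect like
# Study 2's 4.0-point gain (31.5% -> 35.5%). The arm sizes of 2,000
# each are hypothetical; the abstract does not report the allocation.
from math import sqrt
from statistics import NormalDist

def two_proportion_z(p1: float, n1: int, p2: float, n2: int):
    """Return (z, two-sided p-value) for H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled response rate
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error under H0
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

z, p = two_proportion_z(0.315, 2000, 0.355, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # roughly z = 2.68, p = 0.007 under these assumptions
```

Under the assumed arm sizes, a 4-point gain would be comfortably distinguishable from chance, which is consistent with the article treating it as a substantive effect.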
- Award ID(s): 1835022
- NSF-PAR ID: 10383031
- Publisher / Repository: Oxford University Press
- Journal Name: Journal of Survey Statistics and Methodology
- Volume: 11
- Issue: 5
- ISSN: 2325-0984
- Pages: 991-1010
- Sponsoring Org: National Science Foundation
More Like this
- A non-response follow-up study by mail in a national sample of U.S. households had five embedded experiments to test the effects of an advance mailing, alternate survey titles, 1- or 2-page questionnaire length, the inclusion or exclusion of political questions on the 1-page questionnaire, and the position of political content on the first or second page of the 2-page questionnaire. None of these design elements affected the payout of escalated postpaid incentives. Advance mailings had no effect on response rate. A short title (National Survey of Households) had a slightly higher response rate than a longer, more descriptive one (National Survey of Households, Families, and Covid-19). Political question content, whether by inclusion, exclusion, or position, had no discernible effect on response, even among prior-study non-respondents. Questionnaire length was inversely related to response: the 2-page questionnaire depressed the overall response rate by 3.7 points (58.5 compared to 54.8 percent, weighted) and depressed response for the critical sample group of prior non-respondents by 6.9 points (36.9 compared to 29.9).
- Despite the growing popularity of digital payment transactions in the United States, most survey participation incentives are still paid through cash or check and then distributed to respondents or potential sample members via direct mail. Though survey researchers have explored alternative incentives, such as e-gift cards, for online samples, there has been no study to date of electronic cash incentives paid specifically through mobile pay applications. In this article, we briefly review the literature on incentives used in online surveys and then examine survey incentive payment preferences among respondents using a small, web-based survey of younger adults. Our results suggest a greater preference for cash incentives paid through mobile applications than through direct mail, further highlighting the need for more research on the efficacy of electronically delivered monetary incentives.
- This article leverages a five-treatment response mode experiment (paper-only, web-only, sequential web-mail, choice, and choice-plus [choice with a promised incentive for responding online]) that was conducted within a nationally representative survey. Because this survey’s sample was drawn from respondents to another nationally representative survey, we have rich frame data that includes multiple indicators of comfort using the internet for our sample members, and we can compare their response behavior across two surveys. We find that the paper-only treatment yielded a lower response rate than most of the other treatments, but there were no significant differences between the response rates for the other treatments. Among our mixed-mode treatments, the sequential web-mail treatment had the highest percentage of response by web and the lowest cost per response. When focusing on the subgroups that we expected to be the least—and the most—comfortable with the internet, we found that the paper-only treatment generally performed worse than the others, even among subgroups expected not to be comfortable with the internet. We generally did not find significant differences in the effect of response mode treatment on the response rate or percentage of response by web between the subgroups who were the most and least comfortable with the internet. In terms of the consistency of response mode choice over time, our results suggest that some people respond consistently—but also that response mode preferences are weak enough that they can be influenced by the way in which the modes are offered. We ultimately recommend using a sequential web-mail design to minimize costs while still providing people who cannot or will not respond by web with another response mode option. We also find evidence that there may be a growing lack of interest in responding by paper; more research is needed in this area.
- The fourth wave of the National Congregations Study (NCS‐IV) was conducted in 2018–2019 with a nationally representative sample of congregations from across the religious spectrum. The NCS‐IV included a fresh cross‐section of congregations generated in conjunction with the 2018 General Social Survey and a panel of congregations that participated in the third NCS wave. Data were collected via a 65‐minute interview with one key informant from 1,262 congregations. The cooperation rate was 74 percent; the conservatively calculated response rate was 69 percent. Information was gathered about multiple aspects of congregations’ social composition, structure, activities, leadership, and programming. Approximately two‐thirds of the NCS‐IV questionnaire replicates items from previous NCS waves. This introduction to the NCS‐IV symposium describes NCS‐IV methodology and special features of the new data. The three symposium articles present NCS‐IV results about congregations’ political activities, racial and ethnic composition, and worship practices.
- High-quality survey data collection is getting more expensive to conduct because of decreasing response rates and rising data collection costs. Responsive and adaptive designs have emerged as a framework for targeting and reallocating resources during the data collection period to improve survey data collection efficiency. Here, we report on the implementation and evaluation of a responsive design experiment in the National Survey of College Graduates that optimizes the cost-quality tradeoff by minimizing a function of data collection costs and the root mean squared error (RMSE) of a key survey measure, self-reported salary. We used a Bayesian framework to incorporate prior information and generate predictions of estimated response propensity, self-reported salary, and data collection costs for use in our optimization rule. At three points during the data collection process, we implemented the optimization rule and identified cases for which reduced effort would have minimal effect on the RMSE of mean self-reported salary while allowing us to reduce data collection costs. We find that this optimization process allowed us to reduce data collection costs by nearly 10 percent, without a statistically or practically significant increase in the RMSE of mean salary or a decrease in the unweighted response rate. This experiment demonstrates the potential for these types of designs to more effectively target data collection resources to reach survey quality goals.
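The abstract above describes the optimization rule only at a high level. The Python sketch below is a hypothetical illustration of the general idea, not the authors' actual rule: at each decision point, cases whose continued follow-up is predicted to add little to the precision of mean salary relative to its cost are dropped from further effort. All field names, the information proxy, and the cost weight `lambda_cost` are assumptions introduced for this sketch.

```python
# A hypothetical sketch (not the published rule) of a responsive-design
# stopping decision: stop follow-up for cases whose predicted contribution
# to the precision of the mean outcome, net of follow-up cost, is lowest.
from dataclasses import dataclass

@dataclass
class Case:
    case_id: int
    propensity: float     # predicted probability of eventual response
    pred_salary: float    # model-predicted self-reported salary
    followup_cost: float  # predicted cost of continued follow-up

def retain_for_followup(cases, lambda_cost=0.001, budget_share=0.9):
    """Keep the cases with the highest cost-adjusted scores;
    stop data collection effort on the rest."""
    # Expected information gain is proxied here by response propensity
    # times the squared deviation of the predicted value from the mean.
    mean_pred = sum(c.pred_salary for c in cases) / len(cases)

    def score(c):
        info = c.propensity * (c.pred_salary - mean_pred) ** 2
        return info - lambda_cost * c.followup_cost

    ranked = sorted(cases, key=score, reverse=True)
    keep_n = int(budget_share * len(cases))
    return ranked[:keep_n], ranked[keep_n:]

# Usage: at each of the three decision points, re-fit the predictive
# models, refresh propensity/salary/cost predictions for the remaining
# nonrespondents, and re-apply the rule to the active caseload.
```

The ranking-by-score structure mirrors the cost-quality tradeoff the abstract describes; in the actual study the predictions came from a Bayesian framework and the objective was a formal function of cost and RMSE rather than this simple proxy.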