Abstract

This article illustrates some effects of dynamic adaptive design in a large government survey. We present findings from the 2015 National Survey of College Graduates Adaptive Design Experiment, including results and discussion of sample representativeness, response rates, and cost. We also consider the effect of truncating data collection (examining alternative stopping rules) on these metrics. In this experiment, we monitored sample representativeness continuously and altered data collection procedures—increasing or decreasing contact effort—to improve it. Cases that were overrepresented in the achieved sample were assigned to more passive modes of data collection (web or paper) or withheld from the group of cases that received survey reminders, whereas underrepresented cases were assigned to telephone follow-ups. The findings suggest that a dynamic adaptive survey design can improve a data quality indicator (the R-indicator) without increasing cost or reducing response rate. We also find that a dynamic adaptive survey design has the potential to reduce the length of the data collection period, control cost, and increase timeliness of data delivery, if sample representativeness is prioritized over increasing the survey response rate.
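As a rough illustration of the representativeness metric mentioned above: the R-indicator is commonly defined as R(ρ) = 1 − 2·S(ρ), where S(ρ) is the standard deviation of the estimated response propensities across sample cases, so R = 1 indicates equal propensities (a fully representative response set) and lower values indicate greater imbalance. The sketch below is a minimal, unweighted version of that formula for illustration only; it is not the article's actual estimation procedure, which in practice would model propensities (e.g., by logistic regression on frame variables) and incorporate design weights.

```python
import statistics

def r_indicator(propensities):
    """Unweighted R-indicator: R = 1 - 2 * S(rho).

    `propensities` is a list of estimated response propensities, one per
    sample case. S is the population standard deviation. R = 1 means all
    cases are equally likely to respond (fully representative response);
    values toward 0 indicate an unbalanced, less representative response.
    """
    s = statistics.pstdev(propensities)
    return 1.0 - 2.0 * s

# Equal propensities: perfectly representative response set.
print(r_indicator([0.5, 0.5, 0.5, 0.5]))  # 1.0

# Widely varying propensities: lower R, less representative.
print(r_indicator([0.2, 0.8]))
```

In a dynamic adaptive design like the one described, such a metric would be recomputed during data collection, and cases with low (underrepresented) or high (overrepresented) estimated propensities would be routed to more or less intensive contact modes accordingly.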

Authors:
Publication Date:
NSF-PAR ID:
10104973
Journal Name:
Journal of Survey Statistics and Methodology
ISSN:
2325-0984
Publisher:
Oxford University Press