Creators/Authors contains: "DeBell, Matthew"


  1. Abstract

    A prior study found that mailing prepaid incentives with $5 cash visible from outside the envelope increased the response rate to a mail survey by 4 percentage points compared to cash that was not externally visible. This “visible cash effect” suggests opportunities to improve survey response at little or no cost, but many unknowns remain. Among them: Does the visible cash effect generalize to different survey modes, respondent burdens, and cash amounts? Does it differ between fresh samples and reinterview samples? Does it affect data quality or survey costs? This article examines these questions using two linked studies where incentive visibility was randomized in a large probability sample for the American National Election Studies. The first study used $10 incentives with invitations to a long web questionnaire (median 71 minutes, n = 17,849). Visible cash increased response rates in a fresh sample for both screener and extended interview response (by 6.7 and 4.8 percentage points, respectively). Visible cash did not increase the response rate in a reinterview sample where the baseline reinterview response rate was very high (72 percent). The second study used $5 incentives with invitations to a mail-back paper questionnaire (n = 8,000). Visible cash increased the response rate in a sample of prior nonrespondents by 4.0 percentage points (from 31.5 to 35.5), but it did not increase the response rate in a reinterview sample where the baseline reinterview rate was very high (84 percent). In the two studies, several aspects of data quality were investigated, including speeding, non-differentiation, item nonresponse, nonserious responses, noncredible responses, sample composition, and predictive validity; no adverse effects of visible cash were detected, and sample composition improved marginally. Effects on survey costs were either negligible or resulted in net savings. Accumulated evidence now shows that visible cash can increase incentives’ effectiveness in several circumstances.

  2. A non-response follow-up study by mail in a national sample of U.S. households had five embedded experiments to test the effects of an advance mailing, alternate survey titles, 1- or 2-page questionnaire length, the inclusion or exclusion of political questions on the 1-page questionnaire, and the position of political content on the first or second page of the 2-page questionnaire. None of these design elements affected the payout of escalated postpaid incentives. Advance mailings had no effect on response rate. A short title (National Survey of Households) had a slightly higher response rate than a longer, more descriptive one (National Survey of Households, Families, and Covid-19). Political question content, whether by inclusion, exclusion, or position, had no discernible effect on response, even among prior-study non-respondents. Questionnaire length was inversely related to response: the 2-page questionnaire depressed the overall response rate by 3.7 points (58.5 compared to 54.8 percent, weighted) and depressed response for the critical sample group of prior non-respondents by 6.9 points (36.9 compared to 29.9).
  3. Most online survey questions testing political knowledge are susceptible to measurement error when participants look up the answers. This article reports five studies of methods to detect and prevent this common source of error. To detect lookups, “catch questions” are more reliable than self-reports, because many participants lie rather than admit looking up answers. Strongly worded instructions reduced lookups by about two-thirds, while the triple combination of instructions, requesting a promise not to look up answers, and adaptive feedback (asking participants who look up an answer to stop doing so) reduced the percentage of respondents looking up an answer by a further half, to 3%. For office recall knowledge items, photo-based open-ended questions eliminated lookups and had similar validity to traditional text-based versions, making them a good choice when a visual format is viable.
  4. We use survey experiments to test the validity of judicial assumptions underlying campaign finance regulation. Our evidence supports the key assumption that “appearance of corruption” is directly related to the monetary value of campaign contributions. Contrary to the Court’s reasoning in Buckley v. Valeo and Citizens United v. FEC, independent expenditures are more likely to elicit the appearance of corruption than direct contributions, and direct contributions well below the legal limit also create the appearance of corruption. Our findings therefore call into question key legal tenets underlying campaign finance regulation and suggest that the amounts raised by virtually every federal election campaign exceed the threshold required to elicit widespread public perceptions of corruption.