

Title: Effects of Nonresponse, Measurement, and Coverage Bias in Survey Estimates of Voting
Objective: The objective is to estimate the relative contributions of nonresponse, coverage, and measurement biases in survey estimates of voting. Methods: We survey 3,000 Boston-area households sampled from an address-based frame matched, when possible, to telephone numbers. A two-phase sampling design was used to follow up nonrespondents from phone interviews with personal interviews. All cases were then linked to voting records. Results: Nonresponse, coverage, and measurement biased survey estimates at varying stages of the study design. Coverage error linked to missing telephone numbers biased estimates that excluded nonphone households. Overall estimates including nonphone households and nonrespondent interviews contain 25 percent relative bias, attributable in equal parts to measurement and nonresponse. Conclusion: Bias in voting measures is not limited to measurement bias. Researchers should also assess the potential for nonresponse and coverage biases.
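The decomposition rests on a simple identity: with self-reports linked to validated records, measurement bias is the gap between reported and record turnout among respondents, and nonresponse bias is the gap between respondents' record turnout and the full sample's. A minimal sketch of that arithmetic follows, with hypothetical inputs chosen to reproduce the reported 25 percent total (the abstract does not give the component values):

```python
# Illustrative sketch only, not the authors' code: decomposing total relative
# bias in a turnout estimate into measurement and nonresponse components,
# given a sample linked to voting records. All inputs are hypothetical.

def decompose_bias(reported_resp, record_resp, record_full):
    """reported_resp: mean self-reported turnout among respondents
    record_resp:   mean record-validated turnout among the same respondents
    record_full:   mean record-validated turnout for the full linked sample
    """
    measurement = reported_resp - record_resp   # over-reporting by respondents
    nonresponse = record_resp - record_full     # respondents vote more often
    total = reported_resp - record_full         # = measurement + nonresponse
    return {
        "measurement": measurement,
        "nonresponse": nonresponse,
        "relative_bias": total / record_full,   # bias relative to record truth
    }

# Hypothetical: 75% reported turnout among respondents, 67.5% validated
# turnout among respondents, 60% validated turnout in the full sample,
# giving equal measurement and nonresponse components and 25% relative bias.
print(decompose_bias(0.75, 0.675, 0.60))
```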
Award ID(s):
1424433
PAR ID:
10453853
Author(s) / Creator(s):
 
Publisher / Repository:
Wiley-Blackwell
Date Published:
Journal Name:
Social Science Quarterly
Volume:
102
Issue:
2
ISSN:
0038-4941
Page Range / eLocation ID:
p. 939-954
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Objective: The objective of this study was to assess nonresponse error in telephone health survey data based on an address-based sample. Data Sources: Telephone and in-person interviews in Greater Boston. Study Design/Data Collection: Interviewers attempted telephone interviews at addresses that were matched to telephone numbers, using questions drawn from federal health surveys. In-person household interviews were carried out with telephone nonrespondents and at addresses without matching telephone numbers. Principal Findings: After adjusting for demographic differences, only eight of 15 estimates based on the telephone interviews lay within two standard errors of the estimates when data from all three groups were included. Conclusions: For health surveys of address-based samples, many estimates based on telephone respondents differ from the total population in ways that cannot be corrected with simple demographic adjustments.
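The "within two standard errors" screen is straightforward to operationalize. One reasonable reading, sketched below with hypothetical estimates, compares each telephone-based estimate to the combined estimate using the standard error of their difference:

```python
# A minimal sketch of the two-standard-error check described above. Whether
# to use the SE of the difference (as here) or the full-sample SE alone is a
# judgment call; all values below are hypothetical placeholders.
import math

def within_two_se(phone_est, phone_se, full_est, full_se):
    """True if the telephone estimate lies within two standard errors of the
    estimate based on all three groups combined."""
    se_diff = math.sqrt(phone_se**2 + full_se**2)
    return abs(phone_est - full_est) <= 2 * se_diff

# e.g., a health indicator at 42% among phone respondents vs. 36% overall:
print(within_two_se(0.42, 0.02, 0.36, 0.015))  # False -> flagged as divergent
```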
  2. Abstract: This study investigates what role, if any, nonresponse plays in inflating survey estimates of religious behavior, using a multimode survey designed to allow estimation of nonresponse bias. A sample of 3,000 Boston-area households drawn from an address-based frame was randomly divided into two subsamples, contacted by mail, and invited to participate in a survey. The first subsample was asked to complete an interactive voice response interview. The second subsample was asked to complete a survey by telephone if a number was available for the address, or by personal interview if not. Finally, random samples of nonrespondents were recontacted for a personal interview. Comparing attendance estimates from initial interviews with nonrespondent interviews within sample segments yields only small differences that are not statistically significant. Findings suggest that the mechanism generating survey nonresponse is unlikely to be a major cause of bias in religious service attendance estimates in this study.
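The core comparison is an estimate-versus-estimate significance test within each sample segment. A hedged sketch of one standard choice, a two-proportion z-test on invented counts (not the study's data):

```python
# Sketch only: a two-proportion z-test of the kind that could compare an
# attendance estimate from initial respondents with one from recontacted
# nonrespondents. Counts below are invented for illustration.
import math

def two_prop_ztest(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled proportion."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1, p2, (p1 - p2) / se

# 38% attendance among 600 initial respondents vs. 35% among 200 recontacted
# nonrespondents: |z| ~ 0.76, well under 1.96, so not significant.
p1, p2, z = two_prop_ztest(228, 600, 70, 200)
print(f"initial={p1:.2f}  nonrespondent={p2:.2f}  z={z:.2f}")
```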
  3. Abstract: With declining response rates and the challenges of using random-digit dialing (RDD) sampling for telephone surveys, collecting data from address-based samples has become more attractive. Two approaches are conducting telephone interviews at numbers matched to sampled addresses and asking those at sampled addresses to call in to an Interactive Voice Response (IVR) system to answer questions. This study used in-person interviewing to evaluate the effects of nonresponse, and of problems matching telephone numbers, when telephone and IVR were used as the initial modes of data collection. The survey questions were selected from major US federal surveys covering a variety of topics. Both nonresponse and, for telephone, the inability to find matches resulted in important nonresponse error for nearly half the measures across all topics, even after adjustments to fit the known demographic characteristics of the residents. Producing credible estimates requires supplemental data collection strategies to reduce error from nonresponse.
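The "adjustments to fit the known demographic characteristics" are typically post-stratification or raking weights. A minimal post-stratification sketch, with invented cells and margins, shows the mechanics and why such weights cannot fix bias unrelated to the weighting variables:

```python
# A minimal post-stratification sketch, one common form of the demographic
# adjustment the abstract refers to. Cells, counts, and margins are invented.

def poststratify(sample_counts, population_shares):
    """Weight each demographic cell by population share / sample share."""
    n = sum(sample_counts.values())
    return {cell: population_shares[cell] / (sample_counts[cell] / n)
            for cell in sample_counts}

sample_counts = {"18-44": 120, "45-64": 200, "65+": 180}         # respondents
population_shares = {"18-44": 0.45, "45-64": 0.35, "65+": 0.20}  # known margins
print(poststratify(sample_counts, population_shares))
# Under-represented cells (here, 18-44) get weights > 1; over-represented
# cells get weights < 1. Bias within cells is untouched, which is why such
# adjustments could not correct many of the estimates above.
```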
  4. Abstract: Probability surveys are challenged by increasing nonresponse rates, resulting in biased statistical inference. Auxiliary information about populations can be used to reduce bias in estimation. Often, continuous auxiliary variables in administrative records are discretized before release to the public to avoid confidentiality breaches. This may weaken the utility of the administrative records in improving survey estimates, particularly when there is a strong relationship between the continuous auxiliary information and the survey outcome. In this paper, we propose a two-step strategy: first, statistical agencies use the confidential continuous auxiliary data in the population to estimate the response propensity score of the survey sample, which is then included in a modified population data file released to data users. In the second step, data users who do not have access to the confidential continuous auxiliary data conduct predictive survey inference by including the discretized continuous variables and the propensity score as predictors, using splines in a Bayesian model. We show by simulation that the proposed method performs well, yielding more efficient estimates of population means, with 95% credible intervals providing better coverage than alternative approaches. We illustrate the proposed method using the Ohio Army National Guard Mental Health Initiative (OHARNG-MHI). The methods developed in this work are readily available in the R package AuxSurvey.
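The two-step division of labor is the key idea: the agency, which can see the confidential continuous variable, computes and releases a propensity score; the user then models the outcome from released variables only. Below is a schematic sketch under simplifying assumptions: plain logistic and linear regressions stand in for the paper's Bayesian spline model, the data are simulated, and this is not the AuxSurvey package API.

```python
# Schematic sketch of the two-step strategy; not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
N = 5000
x = rng.normal(size=N)                          # confidential continuous auxiliary
respond = rng.random(N) < 1 / (1 + np.exp(-(-0.5 + 0.8 * x)))  # response indicator
y = 2.0 + 1.5 * x + rng.normal(size=N)          # outcome, related to x

# Step 1 (agency side): estimate propensity scores using confidential x,
# then release only the score and a coarsened version of x.
prop_model = LogisticRegression().fit(x.reshape(-1, 1), respond)
pscore = prop_model.predict_proba(x.reshape(-1, 1))[:, 1]
x_binned = np.digitize(x, bins=[-1.0, 0.0, 1.0])  # discretized for release

# Step 2 (user side): model the outcome among respondents using only the
# released variables, then predict for the full population.
X_user = np.column_stack([x_binned, pscore])
fit = LinearRegression().fit(X_user[respond], y[respond])
y_hat = fit.predict(X_user)
print(f"respondent mean: {y[respond].mean():.3f}  (biased upward)")
print(f"model-assisted estimate: {y_hat.mean():.3f}  vs true mean {y.mean():.3f}")
```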
  5. Brenner, P. S. (Ed.)
    Features of the survey measurement process may affect responses from respondents in various racial, ethnic, or cultural groups in different ways. When responses from multiethnic populations are combined, such variability in responding could increase variable error or bias results. The current study examines the survey response process among Black and White respondents answering questions about trust in medical researchers and participation in medical research. Using transcriptions from telephone interviews, we code a rich set of behaviors produced by respondents that past research has shown to be associated with measurement error, including long question-answer sequences, uncodable answers, requests for repetition or clarification, affective responses, and tokens. In analysis, we test for differences between Black and White respondents in the likelihood with which behaviors occur and examine whether the behaviors vary by specific categorizations of the questions, including whether the questions are racially focused. Overall, we find that White respondents produce more behaviors that indicate cognitive processing problems for racially focused questions, which may be interpreted as demonstrating a “cultural” difference in the display of cognitive processing and interaction. Data are provided by the 2013–2014 Voices Heard Survey, a computer-assisted telephone survey designed to measure respondents’ perceptions of barriers and facilitators to participating in medical research. 
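The group-by-question-type tests described here are naturally framed as a logistic model of behavior occurrence with a race-by-question-focus interaction. A hedged sketch on simulated data (not the study's analysis code), with effects invented to mirror the direction of the reported finding:

```python
# Sketch only: logistic regression of whether a problem behavior (e.g., an
# uncodable answer) occurs, on respondent race, question focus, and their
# interaction. Data are simulated so that White respondents show more such
# behaviors on racially focused questions, as the abstract reports.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 2000
black = rng.integers(0, 2, n)        # 1 = Black respondent, 0 = White
racial_q = rng.integers(0, 2, n)     # 1 = racially focused question
logit = -1.5 + 0.1 * black + 0.7 * racial_q - 0.4 * black * racial_q
behavior = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([black, racial_q, black * racial_q]))
fit = sm.Logit(behavior, X).fit(disp=False)
print(fit.summary(xname=["const", "black", "racial_q", "black_x_racial_q"]))
```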