Title: The Effects of Nonresponse and Sampling Omissions on Estimates on Various Topics in Federal Surveys: Telephone and IVR Surveys of Address-Based Samples
Abstract: With declining response rates and the challenges of using RDD sampling for telephone surveys, collecting data from address-based samples has become more attractive. Two approaches are conducting telephone interviews at telephone numbers matched to addresses and asking those at sampled addresses to call into an Interactive Voice Response (IVR) system to answer questions. This study used in-person interviewing to evaluate the effects of nonresponse and of problems matching telephone numbers when telephone and IVR were used as the initial modes of data collection. The survey questions were selected from major US federal surveys covering a variety of topics. Both nonresponse and, for telephone, the inability to find matches produced important nonresponse error for nearly half the measures across all topics, even after adjustments to fit the known demographic characteristics of the residents. Producing credible estimates requires supplemental data collection strategies to reduce error from nonresponse.
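The adjustment the abstract mentions, fitting weights to the known demographic characteristics of residents, is commonly done by raking (iterative proportional fitting). The sketch below is a minimal illustration of that idea under hypothetical column names and population margins; it is not the study's actual weighting procedure.

```python
# Minimal raking sketch: adjust respondent weights until weighted
# demographic margins match known population margins.
# Column names and target shares are hypothetical, not from the study.
import pandas as pd

def rake(df, margins, weight_col="weight", iters=50):
    """margins: dict mapping a column (e.g., "age_group") to
    {category: population share}."""
    df = df.copy()
    df[weight_col] = 1.0
    for _ in range(iters):
        for col, targets in margins.items():
            current = df.groupby(col)[weight_col].sum()
            current = current / current.sum()
            # Scale each category's weights toward its target share.
            factors = {cat: targets[cat] / current[cat] for cat in targets}
            df[weight_col] *= df[col].map(factors)
    return df

resp = pd.DataFrame({
    "age_group": ["18-44", "45-64", "65+", "18-44", "45-64"],
    "sex": ["F", "M", "F", "M", "F"],
})
margins = {
    "age_group": {"18-44": 0.45, "45-64": 0.35, "65+": 0.20},
    "sex": {"F": 0.52, "M": 0.48},
}
print(rake(resp, margins))
```

As the abstract notes, even weights adjusted this way left important nonresponse error for nearly half the measures, which is why the authors recommend supplemental data collection rather than weighting alone.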
Award ID(s):
1424433
PAR ID:
10253107
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Official Statistics
Volume:
36
Issue:
3
ISSN:
2001-7367
Page Range / eLocation ID:
631 to 645
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Objective: The objective of this study was to assess nonresponse error in telephone health survey data based on an address-based sample. Data Sources: Telephone and in-person interviews in Greater Boston. Study Design/Data Collection: Interviewers attempted telephone interviews at addresses that were matched to telephone numbers, using questions drawn from federal health surveys. In-person household interviews were carried out with telephone nonrespondents and at addresses without matching telephone numbers. Principal Findings: After adjusting for demographic differences, only eight of 15 estimates based on the telephone interviews lay within two standard errors of the estimates when data from all three groups were included. Conclusions: For health surveys of address-based samples, many estimates based on telephone respondents differ from the total population in ways that cannot be corrected with simple demographic adjustments.
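The "within two standard errors" criterion in the Principal Findings is straightforward to express in code. The sketch below uses made-up numbers (estimates, sample size) purely to illustrate the comparison; it is not study data.

```python
# Hypothetical check of the "within two standard errors" criterion:
# does the telephone-only estimate fall within 2 SE of the estimate
# computed from all three respondent groups combined?
import math

def within_two_se(phone_est, full_est, full_se):
    return abs(phone_est - full_est) <= 2 * full_se

phone_est = 0.12   # telephone respondents only (illustrative)
full_est = 0.17    # phone + nonrespondent + unmatched-address interviews
n_full = 1200      # hypothetical combined sample size
full_se = math.sqrt(full_est * (1 - full_est) / n_full)

print(within_two_se(phone_est, full_est, full_se))  # False -> estimate differs
```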
  2. Objective: The objective is to estimate the relative contributions of nonresponse, coverage, and measurement biases in survey estimates of voting. Methods: We survey 3,000 Boston-area households sampled from an address-based frame matched, when possible, to telephone numbers. A two-phase sampling design was used to follow up nonrespondents from phone interviews with personal interviews. All cases were then linked to voting records. Results: Nonresponse, coverage, and measurement biased survey estimates at varying stages of the study design. Coverage error linked to missing telephone numbers biased estimates that excluded nonphone households. Overall estimates including nonphone households and nonrespondent interviews show 25 percent relative bias, equally attributable to measurement and nonresponse. Conclusion: Bias in voting measures is not limited to measurement bias. Researchers should also assess the potential for nonresponse and coverage biases.
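Because all cases were linked to voting records, relative bias can be computed directly against the record value at each design stage. The sketch below shows the arithmetic; the turnout figures are invented, chosen only so the final stage reproduces the 25 percent relative bias the abstract reports.

```python
# Relative bias of survey turnout estimates against linked voting records.
# All numbers are illustrative, not study data.
def relative_bias(estimate, truth):
    return (estimate - truth) / truth

true_turnout = 0.60                        # from linked voting records
stages = {
    "phone respondents only": 0.78,        # excludes nonphone households
    "plus nonphone households": 0.76,
    "plus nonrespondent follow-ups": 0.75, # -> 25% relative bias remains
}
for stage, est in stages.items():
    print(f"{stage}: relative bias = {relative_bias(est, true_turnout):+.0%}")
```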
  3. Abstract: This study investigates what role, if any, nonresponse plays in inflating survey estimates of religious behavior, using a multimode survey designed to allow estimation of nonresponse bias. A sample of 3,000 Boston-area households drawn from an address-based frame was randomly divided into two subsamples, contacted by mail, and invited to participate in a survey. The first subsample was asked to complete an interactive voice response interview. The second subsample was asked to complete a survey by telephone if a number was available for the address, or by personal interview if not. Finally, random samples of nonrespondents were recontacted for a personal interview. Comparison of attendance estimates from initial interviews with nonrespondent interviews within sample segments yields only minor differences that are not statistically significant. Findings suggest that the mechanism generating survey nonresponse is unlikely to be a major cause of bias in religious service attendance estimates in this study.
  4. Background: Ecological momentary assessment (EMA) often requires respondents to complete surveys in the moment to report real-time experiences. Because EMA may seem disruptive or intrusive, respondents may not complete surveys as directed in certain circumstances. Purpose: This article aims to determine the effect of environmental characteristics on the likelihood of instances in which respondents do not complete EMA surveys (referred to as survey incompletion), and to estimate the impact of survey incompletion on EMA self-report data. Research Design: An observational study. Study Sample: Ten adult hearing aid (HA) users. Data Collection and Analysis: Experienced, bilateral HA users were recruited and fit with study HAs. The study HAs were equipped with a real-time data logger, an algorithm that logged the data generated by the HAs (e.g., overall sound level, environment classification, and feature status including microphone mode and amount of gain reduction). The study HAs were also connected via Bluetooth to a smartphone app, which collected the real-time data logging data and presented the participants with EMA surveys about their listening environments and experiences. Participants wore the HAs and completed surveys for 1 week. Real-time data logging was triggered when participants completed surveys and when participants ignored or snoozed surveys. The data logging data were used to estimate the effect of environmental characteristics on the likelihood of survey incompletion and to predict participants' responses to survey questions in the instances of survey incompletion. Results: Across the 10 participants, 715 surveys were completed and survey incompletion occurred 228 times. Mixed-effects logistic regression models indicated that survey incompletion was more likely in environments that were less quiet and contained more speech, noise, and machine sounds, and in environments in which directional microphones and noise reduction algorithms were enabled. The survey response predictions further indicated that participants could have reported more challenging environments and more listening difficulty in the instances of survey incompletion. However, the difference in the distribution of survey responses between the observed responses and the combined observed and predicted responses was small. Conclusion: The present study indicates that EMA survey incompletion occurs systematically. Although survey incompletion could bias EMA self-report data, the impact is likely to be small.
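The paper's analysis models the probability of survey incompletion as a function of the logged environment features using mixed-effects logistic regression. As a simpler stand-in, the sketch below fits a plain logit with participant-clustered standard errors; the data file and variable names are hypothetical, not the study's.

```python
# Sketch: probability that a triggered EMA survey went uncompleted,
# as a function of data-logger environment features. The paper fits
# mixed-effects logistic regressions; this simpler version clusters
# standard errors by participant instead of fitting random effects.
import pandas as pd
import statsmodels.formula.api as smf

# One row per triggered survey (completed, ignored, or snoozed),
# joined with the real-time data logging features. Hypothetical file.
df = pd.read_csv("ema_triggers.csv")
df["incomplete"] = (df["status"] != "completed").astype(int)

model = smf.logit(
    "incomplete ~ sound_level_db + speech_prob + noise_prob "
    "+ machine_prob + directional_mic + noise_reduction",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["participant_id"]})
print(model.summary())
```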
  5. Brenner, P. S. (Ed.)
    Features of the survey measurement process may affect responses from respondents in various racial, ethnic, or cultural groups in different ways. When responses from multiethnic populations are combined, such variability in responding could increase variable error or bias results. The current study examines the survey response process among Black and White respondents answering questions about trust in medical researchers and participation in medical research. Using transcriptions from telephone interviews, we code a rich set of behaviors produced by respondents that past research has shown to be associated with measurement error, including long question-answer sequences, uncodable answers, requests for repetition or clarification, affective responses, and tokens. In analysis, we test for differences between Black and White respondents in the likelihood with which behaviors occur and examine whether the behaviors vary by specific categorizations of the questions, including whether the questions are racially focused. Overall, we find that White respondents produce more behaviors that indicate cognitive processing problems for racially focused questions, which may be interpreted as demonstrating a “cultural” difference in the display of cognitive processing and interaction. Data are provided by the 2013–2014 Voices Heard Survey, a computer-assisted telephone survey designed to measure respondents’ perceptions of barriers and facilitators to participating in medical research. 
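A test for group differences in how often a coded behavior occurs can be as simple as a two-proportion z-test, shown below with invented counts; the study itself works with richer question-level categorizations, and nothing here is Voices Heard data.

```python
# Illustrative two-proportion z-test: does a coded behavior (e.g., an
# uncodable answer) occur at different rates for two respondent groups
# on racially focused questions? Counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

counts = [140, 95]     # sequences showing the behavior: White, Black
totals = [1200, 1150]  # total question-answer sequences coded per group

z, p = proportions_ztest(counts, totals)
print(f"z = {z:.2f}, p = {p:.3f}")
```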