
Title: Can Conversational Interviewing Improve Survey Response Quality Without Increasing Interviewer Effects?
Summary

Several studies have shown that conversational interviewing (CI) reduces response bias for complex survey questions relative to standardized interviewing. However, no studies have addressed concerns about whether CI increases intra-interviewer correlations (IICs) in the responses collected, which could negatively affect the overall quality of survey estimates. This paper reports the results of an experimental investigation addressing this question in a national face-to-face survey. We find that CI improves response quality, as in previous studies, without substantially or frequently increasing IICs. Furthermore, any slight increases in the IICs do not offset the reduction in the bias of survey estimates engendered by CI.

 
PAR ID: 10400655
Publisher / Repository: Oxford University Press
Journal Name: Journal of the Royal Statistical Society Series A: Statistics in Society
Volume: 181
Issue: 1
ISSN: 0964-1998
Page Range / eLocation ID: pp. 181–203
Sponsoring Org: National Science Foundation
More Like this
  1. This theory paper focuses on a research methodology, using an autoethnographic approach to reflect on the use of cognitive interviewing (CI) as a method of increasing the quality and validity of questionnaires in pre-validation design and development stages. We first provide a brief review of cognitive interviewing, sometimes called “cognitive think-aloud interviewing” or “think-aloud interviewing,” before presenting a summary of two studies conducted by the authors that used CI. Differences between these two studies are discussed as comparative cases and advice is given to scholars considering the use of CI in their own research. While this paper is not an explicit guide to conducting CI, we do intend to provide advice and wisdom for researchers who are unfamiliar with CI as a method, grounded in our experience with the method. This paper is written with a particular focus on the use of CI in engineering education research (EER) but may be more broadly applicable to other social sciences domains. 
  2. Survey research is at a crossroads. For at least half a century, survey data have been essential to government agencies, policy-makers, businesses, and academics across different fields to inform a wide range of critical decisions with far-reaching consequences. Even in an era of “big data,” surveys remain fundamental to understanding and shaping the economy, politics and governance, and society. Yet challenges to conducting high quality surveys are substantial and increasing. Face-to-face interviewing remains the gold standard of survey research, but the rising costs of such interviews are prohibitive. New technologies, techniques, and data sources present opportunities to improve the efficiency and speed of survey data collection and/or reduce its costs but have shortcomings that may exceed their advantages. To examine and develop strategies to address the challenges facing survey research, the Duke Initiative on Survey Methodology hosted a conference January 14th and 15th, 2021, on the Future of Survey Research. This report summarizes the proceedings and highlights key recommendations that resulted. 
  3. Live video (LV) communication tools (e.g., Zoom) have the potential to provide survey researchers with many of the benefits of in-person interviewing, while also greatly reducing data collection costs, given that interviewers do not need to travel and make in-person visits to sampled households. The COVID-19 pandemic has exposed the vulnerability of in-person data collection to public health crises, forcing survey researchers to explore remote data collection modes—such as LV interviewing—that seem likely to yield high-quality data without in-person interaction. Given the potential benefits of these technologies, the operational and methodological aspects of video interviewing have started to receive research attention from survey methodologists. Although it is remote, video interviewing still involves respondent–interviewer interaction that introduces the possibility of interviewer effects. No research to date has evaluated this potential threat to the quality of the data collected in video interviews. This research note presents an evaluation of interviewer effects in a recent experimental study of alternative approaches to video interviewing including both LV interviewing and the use of prerecorded videos of the same interviewers asking questions embedded in a web survey (“prerecorded video” interviewing). We find little evidence of significant interviewer effects when using these two approaches, which is a promising result. We also find that when interviewer effects were present, they tended to be slightly larger in the LV approach as would be expected in light of its being an interactive approach. We conclude with a discussion of the implications of these findings for future research using video interviewing. 
  4. Abstract

    Rising costs and challenges of in-person interviewing have prompted major surveys to consider moving online and conducting live web-based video interviews. In this paper, we evaluate video mode effects using a two-wave experimental design in which respondents were randomized to either an interviewer-administered video or interviewer-administered in-person survey wave after completing a self-administered online survey wave. This design permits testing of both within- and between-subject differences across survey modes. Our findings suggest that video interviewing is more comparable to in-person interviewing than online interviewing across multiple measures of satisficing, social desirability, and respondent satisfaction.

  5. Abstract

    Weak correspondence across different implicit bias tasks may arise from the contribution of unique forms of automatic and controlled processes to response behavior. Here, we examined the correspondence between estimates of automatic and controlled processing derived from two sequential priming tasks with identical structure and timing designed to separately measure stereotypic (Weapons Identification Task; WIT) and evaluative (Affective Priming Task; APT) associations. Across two studies using predominantly White samples, three consistent patterns emerged in the data: (a) stereotypic bias was stronger for Black targets, whereas evaluative bias was stronger for White targets; (b) overall response accuracy bias correlated modestly across the two tasks; and (c) multinomial processing tree estimates of controlled processing corresponded much more strongly than estimates of automatic processing. These findings support models positing distinct learning and memory systems for different forms of race bias, and suggest that these differing forms contribute to estimates of automatic associations.
