Title: Interviewer Effects in Live Video and Prerecorded Video Interviewing.
Live video (LV) communication tools (e.g., Zoom) have the potential to provide survey researchers with many of the benefits of in-person interviewing while also greatly reducing data collection costs, given that interviewers do not need to travel and make in-person visits to sampled households. The COVID-19 pandemic has exposed the vulnerability of in-person data collection to public health crises, forcing survey researchers to explore remote data collection modes—such as LV interviewing—that seem likely to yield high-quality data without in-person interaction. Given the potential benefits of these technologies, the operational and methodological aspects of video interviewing have started to receive research attention from survey methodologists. Although it is remote, video interviewing still involves respondent–interviewer interaction that introduces the possibility of interviewer effects. No research to date has evaluated this potential threat to the quality of the data collected in video interviews. This research note presents an evaluation of interviewer effects in a recent experimental study of alternative approaches to video interviewing, including both LV interviewing and the use of prerecorded videos of the same interviewers asking questions embedded in a web survey ("prerecorded video" interviewing). We find little evidence of significant interviewer effects when using these two approaches, which is a promising result. We also find that when interviewer effects were present, they tended to be slightly larger in the LV approach, as would be expected given its interactive nature. We conclude with a discussion of the implications of these findings for future research using video interviewing.
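Interviewer effects of this kind are commonly quantified as the share of response variance attributable to interviewers, i.e., the intraclass correlation (ICC) from a multilevel model with a random intercept for each interviewer. The abstract does not specify the authors' estimation approach, so the following Python sketch only illustrates that standard technique; the file name and the column names (response, mode, interviewer_id) are hypothetical, not taken from the study.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per respondent, recording a numeric
    # survey response, the assigned mode (live vs. prerecorded video), and the
    # interviewer who administered (or recorded) the questions.
    df = pd.read_csv("video_interviews.csv")

    # Random-intercept model: a fixed effect for mode plus a random intercept
    # for each interviewer, capturing between-interviewer variance in responses.
    model = smf.mixedlm("response ~ mode", data=df, groups=df["interviewer_id"])
    result = model.fit()

    var_interviewer = float(result.cov_re.iloc[0, 0])  # interviewer variance component
    var_residual = result.scale                        # residual (within-interviewer) variance

    # Intraclass correlation: proportion of variance attributable to interviewers.
    icc = var_interviewer / (var_interviewer + var_residual)
    print(f"Interviewer ICC: {icc:.3f}")

For binary or ordinal survey items, the same logic would apply with a multilevel logistic or ordinal model, with the residual term replaced by the latent-scale constant appropriate to the link function.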
Award ID(s): 1825113
NSF-PAR ID: 10347061
Journal Name: Journal of Survey Statistics and Methodology
Volume: 10
Issue: 2
ISSN: 2325-0984
Page Range / eLocation ID: 317-336
Sponsoring Org: National Science Foundation
More Like this
  1. Response time (RT) – the time elapsing from the beginning of question reading for a given question until the start of the next question – is a potentially important indicator of data quality that can be reliably measured for all questions in a computer-administered survey using a latent timer (i.e., triggered automatically by moving on to the next question). In interviewer-administered surveys, RTs index data quality by capturing the entire length of time spent on a question–answer sequence, including interviewer question-asking behaviors and respondent question-answering behaviors. Consequently, longer RTs may indicate longer processing or interaction on the part of the interviewer, respondent, or both. RTs are an indirect measure of data quality; they do not directly measure reliability or validity, and we do not directly observe what factors lengthen the administration time. In addition, either too long or too short RTs could signal a problem (Ehlen, Schober, and Conrad 2007). However, studies that link components of RTs (interviewers’ question reading and response latencies) to interviewer and respondent behaviors that index data quality strengthen the claim that RTs indicate data quality (Bergmann and Bristle 2019; Draisma and Dijkstra 2004; Olson, Smyth, and Kirchner 2019). In general, researchers tend to consider longer RTs as signaling processing problems for the interviewer, respondent, or both (Couper and Kreuter 2013; Olson and Smyth 2015; Yan and Olson 2013; Yan and Tourangeau 2008). Previous work demonstrates that RTs are associated with various characteristics of interviewers (where applicable), questions, and respondents in web, telephone, and face-to-face interviews (e.g., Couper and Kreuter 2013; Olson and Smyth 2015; Yan and Tourangeau 2008). We replicate and extend this research by examining how RTs are associated with various question characteristics and several established tools for evaluating questions. We also examine whether increased interviewer experience in the study shortens RTs for questions with characteristics that impact the complexity of the interviewer’s task (i.e., interviewer instructions and parenthetical phrases). We examine these relationships in the context of a sample of racially diverse respondents who answered questions about participation in medical research and their health. (A minimal sketch of the latent-timer RT computation appears after this list.)
  2. This paper summarizes a set of design considerations that survey researchers will need to address when exploring the potential for live video to substitute for in-person interviewing. While the solutions appropriate for a particular study are likely to vary, researchers will need to consider (at least) which sample members have access to video and will be willing and able to participate; which video platform(s) to use; whether interviews need to be scheduled in advance or are conducted on demand; how interviewers’ screens should be configured; the interviewer’s visual background and auditory environment; and how interviewers should be trained to administer video interviews, avoid bias, and handle technological problems as they arise.
  3. Rising costs and challenges of in-person interviewing have prompted major surveys to consider moving online and conducting live web-based video interviews. In this paper, we evaluate video mode effects using a two-wave experimental design in which respondents were randomized to either an interviewer-administered video or interviewer-administered in-person survey wave after completing a self-administered online survey wave. This design permits testing of both within- and between-subject differences across survey modes. Our findings suggest that video interviewing is more comparable to in-person interviewing than online interviewing across multiple measures of satisficing, social desirability, and respondent satisfaction.
  4. This WIP paper describes a team approach to phenomenography on ethical engineering practice in the health products industry and its potential impact on research quality. Although qualitative researchers often conduct phenomenography collaboratively, most often a single individual leads the data collection and analysis; others primarily serve as critical reviewers. However, quality may be enhanced by involving collaborators as data analysts in “sustained cycles of scrutiny, debate and testing against the data” [1, p. 88], thus interweaving unique perspectives and insights throughout the analysis process. Nonetheless, collaborating in this intensive data analysis process also presents unique challenges. In this paper, we (1) describe the processes we are applying in an integrated team-based phenomenographic study, (2) identify how the team approach affects research quality, and (3) reflect on the challenges inherent to this process. We ground this reflective case study in the methodological literature on phenomenography. Our team strategies include multiple interviewers (and, when possible, two interviewers per interview), team communication through reflective memos, and integration of individual and team-based data analysis with peer critique of individual analyses. We compare our team approach with typical individual phenomenographic approaches, and we align our procedures with the five strategies of the Qualifying Qualitative Research Quality Framework, or Q3, designed by Walther, Sochacka, and Kellam [2]. In aligning strategies, we consider benefits and trade-offs.
  5. Interviewers’ postinterview evaluations of respondents’ performance (IEPs) are paradata, used to describe the quality of the data obtained from respondents. IEPs are driven by a combination of factors, including respondents’ and interviewers’ sociodemographic characteristics and what actually transpires during the interview. However, relatively few studies examine how IEPs are associated with features of the response process, including facets of the interviewer-respondent interaction and patterns of responding that index data quality. We examine whether features of the response process—various respondents’ behaviors and response quality indicators—are associated with IEPs in a survey with a diverse set of respondents focused on barriers and facilitators to participating in medical research. We also examine whether there are differences in IEPs across respondents’ and interviewers’ sociodemographic characteristics. Our results show that both respondents’ behaviors and response quality indicators predict IEPs, indicating that IEPs reflect what transpires in the interview. In addition, interviewers appear to approach the task of evaluating respondents with differing frameworks, as evidenced by the variation in IEPs attributable to interviewers and associations between IEPs and interviewers’ gender. Further, IEPs were associated with respondents’ education and ethnoracial identity, net of respondents’ behaviors, response quality indicators, and sociodemographic characteristics of respondents and interviewers. Future research should continue to build on studies that examine the correlates of IEPs to better inform whether, when, and how to use IEPs as paradata about the quality of the data obtained.
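The latent-timer response time described in item 1 above is straightforward to compute from question-level paradata: the RT for a question is the elapsed time from the start of that question to the start of the next question within the same interview. The Python sketch below illustrates this computation on hypothetical paradata; the column names (respondent_id, question, start_time) and timestamps are illustrative, not taken from the cited study.

    import pandas as pd

    # Hypothetical paradata: one row per question, timestamped when the instrument
    # (or interviewer) moves on to that question. Values are illustrative only.
    log = pd.DataFrame({
        "respondent_id": [1, 1, 1, 2, 2, 2],
        "question": ["Q1", "Q2", "Q3", "Q1", "Q2", "Q3"],
        "start_time": pd.to_datetime([
            "2021-05-01 10:00:00", "2021-05-01 10:00:42", "2021-05-01 10:01:10",
            "2021-05-01 11:15:00", "2021-05-01 11:15:31", "2021-05-01 11:16:05",
        ]),
    })

    # Latent-timer RT: time from the start of one question to the start of the
    # next question within the same respondent's interview.
    log = log.sort_values(["respondent_id", "start_time"])
    log["rt_seconds"] = (
        log.groupby("respondent_id")["start_time"].shift(-1) - log["start_time"]
    ).dt.total_seconds()

    # The last question in each interview has no following start time, so its RT
    # is missing unless a submit/end-of-interview timestamp is also logged.
    print(log[["respondent_id", "question", "rt_seconds"]])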