Title: A randomized experiment evaluating survey mode effects for video interviewing
Abstract: Rising costs and challenges of in-person interviewing have prompted major surveys to consider moving online and conducting live web-based video interviews. In this paper, we evaluate video mode effects using a two-wave experimental design in which respondents were randomized to either an interviewer-administered video or an interviewer-administered in-person survey wave after completing a self-administered online survey wave. This design permits testing of both within- and between-subject differences across survey modes. Our findings suggest that video interviewing is more comparable to in-person interviewing than online interviewing across multiple measures of satisficing, social desirability, and respondent satisfaction.
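As a rough illustration of the two-wave design the abstract describes, the Python sketch below gives everyone a common wave-1 web interview and then randomizes them to a video or in-person wave 2, supporting both within-subject (wave 1 vs. wave 2) and between-subject (video vs. in-person) contrasts. All variable names, outcomes, and effect sizes are simulated for illustration and are not from the paper.

```python
# Minimal sketch of a two-wave mode experiment, with simulated data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 1000

df = pd.DataFrame({"respondent_id": np.arange(n)})
# Wave 2: 50/50 random assignment to video or in-person interviewing.
df["wave2_mode"] = rng.permutation(np.repeat(["video", "in_person"], n // 2))

# Wave 1 (self-administered web) outcome; scale is arbitrary.
df["y_wave1_web"] = rng.normal(0.0, 1.0, n)
# Wave 2 outcome with an arbitrary, made-up mode effect added.
mode_effect = np.where(df["wave2_mode"] == "video", 0.10, 0.15)
df["y_wave2"] = df["y_wave1_web"] + mode_effect + rng.normal(0.0, 0.5, n)

# Within-subject contrast: change from the web wave to the interviewer wave.
within = (df["y_wave2"] - df["y_wave1_web"]).mean()
# Between-subject contrast: video vs. in-person at wave 2.
between = df.groupby("wave2_mode")["y_wave2"].mean()

print(f"Mean within-subject change: {within:.3f}")
print(between)
```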
Award ID(s): 1835022
PAR ID: 10475910
Author(s) / Creator(s):
Publisher / Repository: Cambridge University Press
Date Published:
Journal Name: Political Science Research and Methods
Volume: 11
Issue: 1
ISSN: 2049-8470
Page Range / eLocation ID: 144 to 159
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Live video (LV) communication tools (e.g., Zoom) have the potential to provide survey researchers with many of the benefits of in-person interviewing while greatly reducing data collection costs, given that interviewers do not need to travel and make in-person visits to sampled households. The COVID-19 pandemic has exposed the vulnerability of in-person data collection to public health crises, forcing survey researchers to explore remote data collection modes, such as LV interviewing, that seem likely to yield high-quality data without in-person interaction. Given the potential benefits of these technologies, the operational and methodological aspects of video interviewing have started to receive research attention from survey methodologists. Although it is remote, video interviewing still involves respondent–interviewer interaction that introduces the possibility of interviewer effects. No research to date has evaluated this potential threat to the quality of the data collected in video interviews. This research note presents an evaluation of interviewer effects in a recent experimental study of alternative approaches to video interviewing, including both LV interviewing and the use of prerecorded videos of the same interviewers asking questions embedded in a web survey ("prerecorded video" interviewing). We find little evidence of significant interviewer effects when using these two approaches, which is a promising result. We also find that when interviewer effects were present, they tended to be slightly larger in the LV approach, as would be expected given its interactive nature. We conclude with a discussion of the implications of these findings for future research using video interviewing.
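The note does not report the authors' estimation details, but interviewer effects are conventionally summarized as the share of response variance attributable to interviewers, an intraclass correlation (ICC) from a random-intercept model. A minimal sketch under that assumption, with simulated data and hypothetical column names (y, interviewer_id):

```python
# Sketch: interviewer ICC from a random-intercept mixed model (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# 20 interviewers with 30 respondents each, plus a small (invented)
# interviewer-level variance component.
n_int, n_resp = 20, 30
interviewer = np.repeat(np.arange(n_int), n_resp)
interviewer_effect = rng.normal(0, 0.3, n_int)[interviewer]
y = 2.0 + interviewer_effect + rng.normal(0, 1.0, n_int * n_resp)
df = pd.DataFrame({"y": y, "interviewer_id": interviewer})

# Random-intercept model: y ~ 1 + (1 | interviewer_id).
result = smf.mixedlm("y ~ 1", df, groups=df["interviewer_id"]).fit()

var_interviewer = float(result.cov_re.iloc[0, 0])  # between-interviewer variance
var_residual = result.scale                        # within-interviewer variance
icc = var_interviewer / (var_interviewer + var_residual)
print(f"Estimated interviewer ICC: {icc:.3f}")
```

An ICC near zero, as this note reports finding, indicates that which interviewer a respondent happened to get explains little of the variation in answers.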
  2. Ethnoracial identity refers to the racial and ethnic categories that people use to classify themselves and others. How it is measured in surveys has implications for understanding inequalities. Yet how people self-identify may not conform to the categories standardized survey questions use to measure ethnicity and race, leading to potential measurement error. In interviewer-administered surveys, answers to survey questions are achieved through interviewer–respondent interaction. An analysis of interviewer–respondent interaction can illuminate whether, when, how, and why respondents experience problems with questions. In this study, we examine how indicators of interviewer–respondent interactional problems vary across ethnoracial groups when respondents answer questions about ethnicity and race. Further, we explore how interviewers respond in the presence of these interactional problems. Data are provided by the 2013–2014 Voices Heard Survey, a computer-assisted telephone survey designed to measure perceptions of participating in medical research among an ethnoracially diverse sample of respondents. 
  3. Abstract: In recent years, household surveys have expended significant effort to counter well-documented increases in direct refusals and greater difficulty contacting survey respondents. A substantial amount of fieldwork effort in panel surveys using telephone interviewing is devoted to contacting respondents to schedule the day and time of the interview. Higher fieldwork effort leads to greater costs and is associated with lower response rates. A new approach was experimentally evaluated in the 2017 wave of the Panel Study of Income Dynamics (PSID) Transition into Adulthood Supplement (TAS) that allowed a randomly selected subset of respondents to choose the day and time of their telephone interview through an online appointment scheduler. TAS is a nationally representative study of US young adults aged 18–28 years embedded within the world's longest-running panel study, the PSID. This paper experimentally evaluates the effect of offering the online appointment scheduler on fieldwork outcomes, including the number of interviewer contact attempts and interview sessions, the number of days to complete the interview, and response rates. We describe panel study members' characteristics associated with uptake of the online scheduler and examine differences in the effectiveness of the treatment across subgroups. Finally, potential cost savings in fieldwork effort due to the online appointment scheduler are evaluated.
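As one illustration of how a headline outcome from such an experiment might be compared across randomized arms, the sketch below contrasts response rates with a two-proportion z-test. The counts are invented, and the abstract does not specify the authors' actual analyses.

```python
# Sketch: treatment vs. control response rates with a two-proportion z-test.
from statsmodels.stats.proportion import proportions_ztest

completed = [612, 578]  # hypothetical completed interviews (scheduler, control)
assigned = [800, 800]   # hypothetical respondents assigned to each arm

stat, pval = proportions_ztest(count=completed, nobs=assigned)
print(f"Response rates: {completed[0]/assigned[0]:.1%} vs {completed[1]/assigned[1]:.1%}")
print(f"z = {stat:.2f}, p = {pval:.3f}")
```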
  4. This paper summarizes a set of design considerations that survey researchers exploring the potential for live video to substitute for in-person interviewing will need to address. While the solutions appropriate for a particular study are likely to vary, researchers will need to consider (at least) which sample members have access to video and will be willing and able to participate, which video platform(s) to use, whether interviews need to be scheduled in advance or are conducted on demand, how interviewers’ screens should be configured, the interviewer’s visual background and auditory environment, and how interviewers should be trained to administer video interviews, avoid bias, and be prepared to handle technological problems as they arise. 
  5. Purpose: Numerous tasks have been developed to measure receptive vocabulary, many of which were designed to be administered in person by a trained researcher or clinician. The purpose of the current study is to compare a common, in-person test of vocabulary with other vocabulary assessments that can be self-administered. Method: Fifty-three participants completed the Peabody Picture Vocabulary Test (PPVT) via online video call, to mimic in-person administration, as well as four additional fully automated, self-administered measures of receptive vocabulary. Participants also completed three control tasks that do not measure receptive vocabulary. Results: Pearson correlations indicated moderate correlations among most of the receptive vocabulary measures (approximately r = .50–.70). As expected, the control tasks showed only weak correlations with the vocabulary measures. However, subsets of items from the four self-administered measures of receptive vocabulary achieved high correlations with the PPVT (r > .80). These subsets were found through a repeated resampling approach. Conclusions: Measures of receptive vocabulary differ in which items are included and in the assessment task (e.g., lexical decision, picture matching, synonym matching). The results of the current study suggest that several self-administered tasks can achieve high correlations with the PPVT when a subset of items is scored rather than the full set. These data provide evidence that subsets of items on one behavioral assessment can correlate more highly with another measure. In practical terms, they demonstrate that self-administered, automated measures of receptive vocabulary can serve as reasonable substitutes for at least one test (the PPVT) that requires human interaction. That several of the fully automated measures correlated highly with the PPVT suggests that different tasks could be selected depending on the needs of the researcher. It is important to note that the aim was not to establish the clinical relevance of these measures, but to establish whether researchers could use an experimental task of receptive vocabulary that probes a construct similar to that captured by the PPVT, and to use these measures to study individual differences.
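The abstract describes, but does not fully specify, the repeated resampling used to find high-correlating item subsets. A minimal sketch of one plausible version, with simulated data: draw random item subsets, score each participant on the subset, and keep the subset whose scores correlate most highly with PPVT scores.

```python
# Sketch: repeated resampling of item subsets to maximize correlation with PPVT.
# All data are simulated; subset size and iteration count are arbitrary choices.
import numpy as np

rng = np.random.default_rng(42)

n_participants, n_items = 53, 100
item_responses = rng.integers(0, 2, size=(n_participants, n_items))  # 0/1 item scores
# Stand-in PPVT scores, loosely tied to overall accuracy for illustration.
ppvt = item_responses.mean(axis=1) + rng.normal(0, 0.05, n_participants)

best_r, best_subset = -1.0, None
for _ in range(5000):  # repeated resampling of candidate item subsets
    subset = rng.choice(n_items, size=30, replace=False)
    scores = item_responses[:, subset].mean(axis=1)
    r = np.corrcoef(scores, ppvt)[0, 1]
    if r > best_r:
        best_r, best_subset = r, subset

print(f"Best subset correlation with PPVT: r = {best_r:.2f}")
```

One design caveat: selecting the subset that maximizes correlation on the same sample risks capitalizing on chance, so validating the chosen subset on held-out participants would be a natural safeguard; the abstract does not detail how the authors handled this.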