


Title: Questioning Identity: How a Diverse Set of Respondents Answer Standard Questions About Ethnicity and Race

Ethnoracial identity refers to the racial and ethnic categories that people use to classify themselves and others. How it is measured in surveys has implications for understanding inequalities. Yet how people self-identify may not conform to the categories standardized survey questions use to measure ethnicity and race, leading to potential measurement error. In interviewer-administered surveys, answers to survey questions are achieved through interviewer–respondent interaction. An analysis of interviewer–respondent interaction can illuminate whether, when, how, and why respondents experience problems with questions. In this study, we examine how indicators of interviewer–respondent interactional problems vary across ethnoracial groups when respondents answer questions about ethnicity and race. Further, we explore how interviewers respond in the presence of these interactional problems. Data are provided by the 2013–2014 Voices Heard Survey, a computer-assisted telephone survey designed to measure perceptions of participating in medical research among an ethnoracially diverse sample of respondents.

 
NSF-PAR ID: 10415878
Publisher / Repository: SAGE Publications
Journal Name: Field Methods
ISSN: 1525-822X
Page Range / eLocation ID: Article No. 1525822X2311738
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Response time (RT) – the time elapsing from the beginning of question reading for a given question until the start of the next question – is a potentially important indicator of data quality that can be reliably measured for all questions in a computer-administered survey using a latent timer (i.e., triggered automatically by moving on to the next question). In interviewer-administered surveys, RTs index data quality by capturing the entire length of time spent on a question–answer sequence, including interviewer question-asking behaviors and respondent question-answering behaviors. Consequently, longer RTs may indicate longer processing or interaction on the part of the interviewer, respondent, or both. RTs are an indirect measure of data quality; they do not directly measure reliability or validity, and we do not directly observe what factors lengthen the administration time. In addition, either too long or too short RTs could signal a problem (Ehlen, Schober, and Conrad 2007). However, studies that link components of RTs (interviewers’ question reading and response latencies) to interviewer and respondent behaviors that index data quality strengthen the claim that RTs indicate data quality (Bergmann and Bristle 2019; Draisma and Dijkstra 2004; Olson, Smyth, and Kirchner 2019). In general, researchers tend to consider longer RTs as signaling processing problems for the interviewer, respondent, or both (Couper and Kreuter 2013; Olson and Smyth 2015; Yan and Olson 2013; Yan and Tourangeau 2008). Previous work demonstrates that RTs are associated with various characteristics of interviewers (where applicable), questions, and respondents in web, telephone, and face-to-face interviews (e.g., Couper and Kreuter 2013; Olson and Smyth 2015; Yan and Tourangeau 2008). We replicate and extend this research by examining how RTs are associated with various question characteristics and several established tools for evaluating questions. We also examine whether increased interviewer experience in the study shortens RTs for questions with characteristics that impact the complexity of the interviewer’s task (i.e., interviewer instructions and parenthetical phrases). We examine these relationships in the context of a sample of racially diverse respondents who answered questions about participation in medical research and their health. 
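As a concrete illustration of the latent-timer definition above, the sketch below computes RTs from per-question screen-opening timestamps: the RT for a question is the elapsed time until the next question's screen is opened. The log structure and field names are hypothetical, not the instrumentation used in the study.

```python
from datetime import datetime

# Minimal sketch of latent-timer response times (RTs): each record holds the
# timestamp at which a question's screen was opened (i.e., question reading
# began); the RT for question k is the gap until question k+1 was opened.
# Field names ("question", "opened_at") are illustrative, not from the study.
screen_log = [
    {"question": "Q1", "opened_at": datetime(2014, 3, 1, 10, 0, 0)},
    {"question": "Q2", "opened_at": datetime(2014, 3, 1, 10, 0, 14)},
    {"question": "Q3", "opened_at": datetime(2014, 3, 1, 10, 0, 41)},
]

def latent_timer_rts(log):
    """Return {question: seconds elapsed until the next question started}."""
    rts = {}
    for current, nxt in zip(log, log[1:]):
        rts[current["question"]] = (nxt["opened_at"] - current["opened_at"]).total_seconds()
    return rts

print(latent_timer_rts(screen_log))  # {'Q1': 14.0, 'Q2': 27.0}
```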
  2. Abstract

    Interviewers’ postinterview evaluations of respondents’ performance (IEPs) are paradata, used to describe the quality of the data obtained from respondents. IEPs are driven by a combination of factors, including respondents’ and interviewers’ sociodemographic characteristics and what actually transpires during the interview. However, relatively few studies examine how IEPs are associated with features of the response process, including facets of the interviewer-respondent interaction and patterns of responding that index data quality. We examine whether features of the response process—various respondents’ behaviors and response quality indicators—are associated with IEPs in a survey with a diverse set of respondents focused on barriers and facilitators to participating in medical research. We also examine whether there are differences in IEPs across respondents’ and interviewers’ sociodemographic characteristics. Our results show that both respondents’ behaviors and response quality indicators predict IEPs, indicating that IEPs reflect what transpires in the interview. In addition, interviewers appear to approach the task of evaluating respondents with differing frameworks, as evidenced by the variation in IEPs attributable to interviewers and associations between IEPs and interviewers’ gender. Further, IEPs were associated with respondents’ education and ethnoracial identity, net of respondents’ behaviors, response quality indicators, and sociodemographic characteristics of respondents and interviewers. Future research should continue to build on studies that examine the correlates of IEPs to better inform whether, when, and how to use IEPs as paradata about the quality of the data obtained.
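Because part of the variation in IEPs is attributed to interviewers, analyses of this kind are commonly specified as multilevel models with a random intercept per interviewer. The sketch below shows one plausible specification using statsmodels; the data file, column names, and predictors are illustrative assumptions, not the authors' actual model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical analysis file: one row per respondent, containing the
# interviewer's post-interview evaluation (iep), respondent behaviors and
# response quality indicators, and sociodemographics. All column names and
# the file itself are illustrative placeholders.
df = pd.read_csv("iep_analysis_file.csv")

# Random intercept for interviewer captures the share of IEP variation
# attributable to interviewers; fixed effects stand in for respondent
# behaviors, response quality indicators, and respondent characteristics.
# The evaluation score is treated as approximately continuous here purely
# for illustration.
model = smf.mixedlm(
    "iep ~ dont_know_count + qualified_answers + resp_education + resp_ethnoracial_id",
    data=df,
    groups=df["interviewer_id"],
)
result = model.fit()
print(result.summary())
```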

     
  3. Abstract

    Asking questions fluently, exactly as worded, and at a reasonable pace is a fundamental part of a survey interviewer’s role. Doing so allows the question to be asked as intended by the researcher, may decrease the risk of measurement error, and can contribute to rapport. Despite the central importance placed on reading questions exactly as worded, interviewers commonly misread questions, and it is not always clear why. Thus, understanding the risk of measurement error requires understanding how different interviewer, respondent, and question features may trigger question-reading problems. In this article, we evaluate the effects of question features on question-asking behaviors, controlling for interviewer and respondent characteristics. We also examine how question-asking behaviors are related to question-asking time. Using two nationally representative telephone surveys in the United States, we find that longer questions and questions with transition statements are less likely to be read exactly and fluently, that questions with higher reading levels and parentheticals are less likely to be read exactly across both surveys, and that disfluent readings decrease as interviewers gain experience across the field period. Other question characteristics vary in their associations with the outcomes across the two surveys. We also find that inexact and disfluent question readings are longer, but read at a faster pace, than exact and fluent readings. We conclude with implications for interviewer training and questionnaire design.

     
  4. Motivation. Teachers can play a role in disrupting social inequities that are reflected in education, such as racial disparities in who succeeds in CS. Professional learning addressing inequities causes teachers to confront difficult topics, including how their own identities impact these problems. Understanding the differing ways teachers’ identities surface can provide insights into designing better supports for their professional learning. Objectives. The goal of this paper is to examine the teaching and racial identities of two secondary CS teachers who participated in professional learning focused on combining CS content and equity pedagogy. The second goal of this paper is to demonstrate how discourse analytic methods can be used to examine interviews and other interactional data. Method. Teachers were interviewed individually about their teaching identity, racial identity, and professional learning. Drawing on Bucholtz and Hall’s identity and interaction framework, interviews were examined for linguistic and discursive features reflecting positionality (i.e., how identity surfaces through the way individuals present themselves to and are perceived by others) and indexicality (i.e., various ways of referring to an identity). Results. Participants used personal deictics, quotative markers, code choice, and affective and epistemic stances when discussing and negotiating their identities with the interviewer. The data reflected ways teachers problematized questions about teaching identity, negotiated tensions in their disciplinary identities, found the topic of race difficult to address, and highlighted other aspects of their identities relevant to understanding and discussing race. Discussion. The study provides a demonstration of how discourse analytic methods can reveal nuances of teacher identity that may be overlooked with other qualitative approaches. Findings also revealed how teachers’ ethnic identities might be used as a lever in helping teachers discuss the difficult topic of race in education. Discourse analytic methods are encouraged for future CS education research focused on interactional analyses. 
  5. Abstract

    One of the most difficult tasks facing survey researchers is balancing the imperative to keep surveys short with the need to measure important concepts accurately. Not only are long batteries prohibitively expensive, but lengthy surveys can also lead to less informative answers from respondents. Yet scholars often wish to measure traits that require a multi-item battery. To resolve these competing constraints, we propose the use of adaptive inventories. This approach uses computerized adaptive testing methods to minimize the number of questions each respondent must answer while maximizing the accuracy of the resulting measurement. We provide evidence supporting the utility of adaptive inventories through an empirically informed simulation study, an experimental study, and a detailed case study using data from the 2016 American National Election Study (ANES) Pilot. The simulation and experiment illustrate the superior performance of adaptive inventories relative to fixed-reduced batteries in terms of precision and accuracy. The ANES analysis serves as an illustration of how adaptive inventories can be developed and fielded, and it also validates an adaptive inventory with a nationally representative sample. Critically, we provide extensive software tools that allow researchers to incorporate adaptive inventories into their own surveys.
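The adaptive-inventory idea can be made concrete with a small computerized adaptive testing loop: administer the unanswered item that is most informative at the current estimate of the latent trait, record the answer, and re-estimate. The sketch below uses a two-parameter logistic IRT model with made-up item parameters and a fixed-length stopping rule; it illustrates the general approach, not the authors' calibrated battery or software.

```python
import numpy as np

# Minimal sketch of an adaptive inventory under a 2-parameter logistic (2PL)
# IRT model: at each step, ask the remaining item with the highest Fisher
# information at the current trait estimate, then update the estimate from
# the accumulated responses (posterior mean over a grid prior). Item
# parameters, stopping rule, and item IDs are illustrative assumptions.

items = {  # item_id: (discrimination a, difficulty b)
    "q1": (1.6, -1.0),
    "q2": (1.2, 0.0),
    "q3": (2.0, 0.5),
    "q4": (0.8, 1.2),
    "q5": (1.4, -0.4),
}

grid = np.linspace(-4, 4, 161)        # grid over the latent trait theta
prior = np.exp(-0.5 * grid**2)        # standard normal prior (unnormalized)

def p_endorse(theta, a, b):
    """2PL probability of an affirmative response."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    p = p_endorse(theta, a, b)
    return a**2 * p * (1 - p)

def eap_estimate(asked, responses):
    """Posterior mean and sd of theta given responses (1/0) to asked items."""
    like = np.ones_like(grid)
    for item_id, resp in zip(asked, responses):
        a, b = items[item_id]
        p = p_endorse(grid, a, b)
        like *= p if resp == 1 else (1 - p)
    post = like * prior
    post /= post.sum()
    mean = np.sum(grid * post)
    sd = np.sqrt(np.sum((grid - mean) ** 2 * post))
    return mean, sd

def run_adaptive_inventory(answer_fn, max_items=3):
    asked, responses = [], []
    theta, sd = 0.0, 1.0
    while len(asked) < max_items:
        remaining = [i for i in items if i not in asked]
        next_item = max(remaining, key=lambda i: item_information(theta, *items[i]))
        asked.append(next_item)
        responses.append(answer_fn(next_item))   # 1 or 0 from the respondent
        theta, sd = eap_estimate(asked, responses)
    return asked, theta, sd

# Example: a simulated respondent who endorses only the easier items.
simulated = lambda item_id: int(items[item_id][1] < 0.3)
print(run_adaptive_inventory(simulated))
```

The key design choice is the item-selection rule: maximizing Fisher information at the provisional estimate is the standard way adaptive inventories reach a target precision with far fewer items than a fixed battery.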

     