This article illustrates some effects of dynamic adaptive design in a large government survey. We present findings from the 2015 National Survey of College Graduates Adaptive Design Experiment, including results and discussion of sample representativeness, response rates, and cost. We also consider the effect of truncating data collection (examining alternative stopping rules) on these metrics. In this experiment, we monitored sample representativeness continuously and altered data collection procedures—increasing or decreasing contact effort—to improve it. Cases that were overrepresented in the achieved sample were assigned to more passive modes of data collection (web or paper) or withheld from the group of cases that received survey reminders, whereas underrepresented cases were assigned to telephone follow-ups. The findings suggest that a dynamic adaptive survey design can improve a data quality indicator (the R-indicator) without increasing cost or reducing the response rate. We also find that a dynamic adaptive survey design has the potential to reduce the length of the data collection period, control cost, and increase timeliness of data delivery, if sample representativeness is prioritized over increasing the survey response rate.
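The R-indicator mentioned above summarizes how evenly estimated response propensities are spread across the sample: R = 1 − 2·S(ρ), where S(ρ) is the (population) standard deviation of the propensities, so R = 1 indicates perfectly balanced response. A minimal sketch of that formula; the function name and toy propensity values are illustrative, not taken from the study:

```python
from statistics import pstdev

def r_indicator(propensities):
    """R-indicator: 1 - 2 * population standard deviation of the
    estimated response propensities. 1.0 means response propensities
    are identical across sample members (maximally representative);
    lower values indicate a less balanced achieved sample."""
    return 1.0 - 2.0 * pstdev(propensities)

# Identical propensities -> perfectly representative response
print(r_indicator([0.4, 0.4, 0.4, 0.4]))  # → 1.0

# Strongly uneven propensities -> a much lower R-indicator
print(round(r_indicator([0.9, 0.9, 0.1, 0.1]), 2))  # → 0.2
```

In practice the propensities would be estimated from a response model (e.g., logistic regression on frame variables) rather than supplied directly, and monitoring R during fieldwork is what drives the adaptive interventions described in the abstract.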
- Publication Date:
- NSF-PAR ID: 10104973
- Journal Name: Journal of Survey Statistics and Methodology
- ISSN: 2325-0984
- Publisher: Oxford University Press
- Sponsoring Org: National Science Foundation
More Like this
- Obeid, I. (Ed.) The Neural Engineering Data Consortium (NEDC) is developing the Temple University Digital Pathology Corpus (TUDP), an open-source database of high-resolution images from scanned pathology samples [1], as part of its National Science Foundation-funded Major Research Instrumentation grant titled “MRI: High Performance Digital Pathology Using Big Data and Machine Learning” [2]. The long-term goal of this project is to release one million images. We have currently scanned over 100,000 images and are in the process of annotating breast tissue data for our first official corpus release, v1.0.0. This release contains 3,505 annotated images of breast tissue including 74 patients with cancerous diagnoses (out of a total of 296 patients). In this poster, we will present an analysis of this corpus and discuss the challenges we have faced in efficiently producing high-quality annotations of breast tissue. It is well known that state-of-the-art algorithms in machine learning require vast amounts of data. Fields such as speech recognition [3], image recognition [4] and text processing [5] are able to deliver impressive performance with complex deep learning models because they have developed large corpora to support training of extremely high-dimensional models (e.g., billions of parameters). Other fields that do not …
- The purpose of the project is to identify how to measure various types of institutional support as it pertains to underrepresented and underserved populations in colleges of engineering and science. We are grounding this investigation in the Model of Co-Curricular Support, a conceptual framework that emphasizes the breadth of assistance currently used to support undergraduate students in engineering and science. The results from our study will help prioritize the elements of institutional support that should appear somewhere in a college’s suite of support efforts to improve engineering and science learning environments and design effective programs, activities, and services. Our poster will present: 1) an overview of the instrument development process; 2) evaluation of the prototype for face and content validity from students and experts; and 3) instrument revision and data collection to determine test validity and reliability across varied institutional contexts. In evaluating the initial survey, we included multiple rounds of feedback from students and experts, receiving feedback from 46 participants (38 students, 8 administrators). We intentionally sampled for representation across engineering and science colleges; gender identity; race/ethnicity; international student status; and transfer student status. The instrument was deployed for the first time in Spring 2018 to the institutional project …
- We conducted an experiment to evaluate the effects on fieldwork outcomes and interview mode of switching to a web-first mixed-mode data collection design (self-administered web interview and interviewer-administered telephone interview) from a telephone-only design. We examine whether the mixed-mode option leads to better survey outcomes, based on response rates, fieldwork outcomes, interview quality and costs. We also examine respondent characteristics associated with completing a web interview rather than a telephone interview. Our mode experiment study was conducted in the 2019 wave of the Transition into Adulthood Supplement (TAS) to the US Panel Study of Income Dynamics (PSID). TAS collects information biennially from approximately 3,000 young adults in PSID families. The shift to a mixed-mode design for TAS was aimed at reducing costs and increasing respondent cooperation. We found that for mixed-mode cases compared to telephone-only cases, response rates were higher, interviews were completed faster and with lower effort, the quality of the interview data appeared better, and fieldwork costs were lower. A clear set of respondent characteristics reflecting demographic and socioeconomic characteristics, technology availability and use, time use, and psychological health were associated with completing a web interview rather than a telephone interview.
- A prior study found that mailing prepaid incentives with $5 cash visible from outside the envelope increased the response rate to a mail survey by 4 percentage points compared to cash that was not externally visible. This “visible cash effect” suggests opportunities to improve survey response at little or no cost, but many unknowns remain. Among them: Does the visible cash effect generalize to different survey modes, respondent burdens, and cash amounts? Does it differ between fresh samples and reinterview samples? Does it affect data quality or survey costs? This article examines these questions using two linked studies where incentive visibility was randomized in a large probability sample for the American National Election Studies. The first study used $10 incentives with invitations to a long web questionnaire (median 71 minutes, n = 17,849). Visible cash increased response rates in a fresh sample for both screener and extended interview response (by 6.7 and 4.8 percentage points, respectively). Visible cash did not increase the response rate in a reinterview sample where the baseline reinterview response rate was very high (72 percent). The second study used $5 incentives with invitations to a mail-back paper questionnaire (n = 8,000). Visible cash increased the response rate in a sample …
- Background: Internet data can be used to improve infectious disease models. However, the representativeness and individual-level validity of internet-derived measures are largely unexplored, as this requires ground truth data for study. Objective: This study sought to identify relationships between Web-based behaviors and/or conversation topics and health status using a ground truth, survey-based dataset. Methods: This study leveraged a unique dataset of self-reported surveys, microbiological laboratory tests, and social media data from the same individuals toward understanding the validity of individual-level constructs pertaining to influenza-like illness in social media data. Logistic regression models were used to identify illness in Twitter posts using user posting behaviors and topic model features extracted from users’ tweets. Results: Of 396 original study participants, only 81 met the inclusion criteria for this study. Of these participants’ tweets, we identified only two instances that were related to health and occurred within 2 weeks (before or after) of a survey indicating symptoms. It was not possible to predict when participants reported symptoms using features derived from topic models (area under the curve [AUC]=0.51; P=.38), though it was possible using behavior features, albeit with a very small effect size (AUC=0.53; P≤.001). Individual symptoms were generally not predictable either. …
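The AUC values reported in that abstract (0.51, 0.53) can be read as rank probabilities: the chance that a randomly chosen positive case is scored higher than a randomly chosen negative one, so 0.5 is chance level. A minimal, illustrative computation via the Mann-Whitney U statistic; the function name and the toy labels/scores are invented for demonstration, not drawn from the study:

```python
def auc(labels, scores):
    """Area under the ROC curve computed as the Mann-Whitney U
    statistic: the fraction of (positive, negative) pairs where the
    positive case receives the higher score (ties count as 0.5).
    An AUC near 0.5, as in the topic-model result above, means the
    classifier performs no better than chance."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy data: two positive and two negative cases
labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
print(auc(labels, scores))  # → 0.75
```

In the study itself the scores would come from fitted logistic regression models over posting-behavior or topic-model features; this pairwise-ranking view is simply an equivalent way to interpret the reported AUC numbers.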