Many data sources, from tracking of social behavior and election polling to testing studies for understanding disease spread, are subject to sampling bias whose implications are not yet fully understood. In this paper we study estimation of a given feature (such as a disease, or behavior on social media platforms) from biased samples, treating non-respondent individuals as missing data. Under the assumption that willingness to be sampled increases with the symptoms of the feature, the prevalence of the feature among sampled individuals is biased upward. This can be viewed as a regression model with symptoms as covariates and the feature as outcome. The outcome is assumed unknown at the time of sampling, so the missingness mechanism depends only on the covariates. We show that, in spite of this, data are missing at random only when the sizes of the symptom classes in the population are known; otherwise data are missing not at random. From an information-theoretic viewpoint, we show that sampling bias corresponds to external information due to individuals in the population knowing their covariates, and we quantify this external information by active information. The reduction in prevalence when sampling bias is adjusted for similarly translates into active information due to bias correction, with sign opposite to that of the active information due to testing bias. We develop unified results showing that prevalence and active information estimates are asymptotically normal under all missing data mechanisms, both when testing errors are absent and when they are present. The asymptotic behavior of the estimators is illustrated through simulations.
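To make the information-theoretic reading concrete, here is a minimal Python simulation of symptom-driven sampling. The population parameters, the sampling probabilities, and the definition of active information as the log-ratio of naive to true prevalence are our illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical population: a binary feature with true prevalence 5%,
# and symptoms that are more common among feature carriers.
feature = rng.random(N) < 0.05
symptoms = np.where(feature, rng.random(N) < 0.8, rng.random(N) < 0.1)

# Missingness depends only on the covariate (symptoms), not on the feature:
# symptomatic individuals are more willing to be sampled.
p_sample = np.where(symptoms, 0.5, 0.05)
sampled = rng.random(N) < p_sample

p_true = feature.mean()
p_naive = feature[sampled].mean()  # biased upward via the symptom channel

# Active information read as a log-probability ratio (our assumption).
ai_testing_bias = np.log(p_naive / p_true)  # > 0: external information
ai_bias_correction = -ai_testing_bias       # opposite sign after adjustment
print(f"true={p_true:.3f} naive={p_naive:.3f} AI+={ai_testing_bias:.3f}")
```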
Data Integration by Combining Big Data and Survey Sample Data for Finite Population Inference
Summary: The statistical challenges in using big data to make valid statistical inference for finite populations have been well documented in the literature. These challenges arise primarily from statistical bias due to the big data source's under-coverage of the population of interest and from measurement errors in the variables available in the data set. By stratifying the population into a big data stratum and a missing data stratum, we can estimate quantities for the missing data stratum using a fully responding probability sample, and hence for the population as a whole using a data integration estimator. By expressing the data integration estimator as a regression estimator, we can handle measurement errors in the variables in the big data and in the probability sample. We also propose a fully nonparametric classification method for identifying the overlapping units and develop a bias-corrected data integration estimator under misclassification errors. Finally, we develop a two-step regression data integration estimator to deal with measurement errors in the probability sample. An advantage of the approach advocated in this paper is that we do not have to make unrealistic missing-at-random assumptions for the methods to work. The proposed method is applied to a real-data example using the 2015–2016 Australian Agricultural Census data.
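As a reading of the stratification idea above, here is a minimal Python sketch of a data integration estimator of a population mean, under our own simplifying assumptions: the big data source is a complete census of its stratum, the overlap status of survey units is known without error, and there are no measurement errors (the paper's regression form and bias corrections are omitted).

```python
import numpy as np

def data_integration_mean(y_big, y_survey, w_survey, in_big, N):
    """Stratified data-integration estimate of a population mean.

    y_big: outcomes for every unit in the big data stratum B (a census of B).
    y_survey, w_survey: outcomes and design weights from a fully
        responding probability sample of the whole population.
    in_big: boolean array, True if a survey unit also belongs to B.
    N: known finite-population size.
    """
    N_B = len(y_big)
    total_B = np.sum(y_big)  # stratum B is fully observed
    # Missing stratum C = U \ B: a Hajek (weighted) mean over the
    # survey units that fall outside the big data stratum.
    yC, wC = y_survey[~in_big], w_survey[~in_big]
    mean_C = np.sum(wC * yC) / np.sum(wC)
    return (total_B + (N - N_B) * mean_C) / N
```

With the regression form described in the summary, the same estimator can additionally absorb measurement-error corrections; that refinement is not sketched here.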
- Award ID(s): 1733572
- PAR ID: 10449561
- Publisher / Repository: Wiley-Blackwell
- Date Published:
- Journal Name: International Statistical Review
- Volume: 89
- Issue: 2
- ISSN: 0306-7734
- Page Range / eLocation ID: p. 382-401
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Trip generation, a critical first step in travel demand forecasting, requires not only estimating trips from the observed sample data but also calculating the total number of trips in the population, including both the observed trips and the trips missed by the sample (we call them missing trips in this paper). The latter question, how to recover missing trips, is scarcely studied in the academic literature, and the state of the practice is to apply sample weights to extrapolate from observed trips to the population total. In recent years, data from big location-based services (LBS) have become a promising alternative source (in addition to household travel survey data) for trip generation. Because users self-select into the mobile services that generate LBS data, selection bias exists in LBS data, and the kinds of trips excluded or included differ systematically among data sources. This study addresses this issue and develops a behaviorally informed approach to quantify the selection biases and recover missing trips. The key idea is that, because the biases reflected in different data sources are likely different, integrating multiple biased data sources will mitigate the biases. This is achieved by formulating a capture probability that specifies the probability of capturing a trip in a data set as a function of various behavioral factors (e.g., socio-demographics and area-related factors) and estimating the associated parameters through maximum likelihood or Bayesian methods. The approach is evaluated through experimental studies that test the effects of data and model uncertainty on its ability to recover missing trips. The model is also applied to two real-world case studies: one using the 2017 National Household Travel Survey data and the other using two LBS data sets. Our results demonstrate the robustness of the model in recovering missing trips, even when the analyst completely misspecifies the underlying trip generation process and the capture probability functions (for quantifying selection biases). The developed methodology is scalable to any number of data sets and is applicable to both big and small data sets.
History: This paper has been accepted for the Transportation Science Special Issue on Machine Learning Methods for Urban Mobility. Funding: This work was supported by the Division of Civil, Mechanical and Manufacturing Innovation [Grant 2114260], the National Institute of General Medical Sciences [Grant 1R01GM108731-01A1], and the U.S. Department of Transportation [Grant 69A3551747116]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/trsc.2024.0550.
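A schematic of how a capture probability could recover missing trips, in the spirit of the abstract above: the logistic form and the Horvitz-Thompson-style correction below are our illustration of the general idea, not necessarily the paper's exact likelihood.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def recovered_total(X, betas):
    """Estimate the population total of trips from the observed trips.

    X: (n, d) behavioral covariates for the n trips observed in at least
       one data source.
    betas: list of (d,) coefficient vectors, one per data source, e.g.
       fitted by maximum likelihood on the capture histories.
    """
    # p[i, k]: probability that source k captures trip i (logistic in X).
    p = np.column_stack([sigmoid(X @ b) for b in betas])
    # Probability that trip i is captured by at least one source.
    pi = 1.0 - np.prod(1.0 - p, axis=1)
    # Each observed trip stands in for 1/pi trips, missing ones included.
    return np.sum(1.0 / pi)
```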
-
Abstract: Mendelian randomization (MR) is a popular method in genetic epidemiology for estimating the effect of an exposure on an outcome using genetic variants as instrumental variables (IVs), with two‐sample summary‐data MR being the most popular design. Unfortunately, instruments in MR studies are often weakly associated with the exposure, which can bias effect estimates and inflate Type I errors. In this work, we propose test statistics that are robust under weak‐instrument asymptotics by extending the Anderson–Rubin test, Kleibergen's test, and the conditional likelihood ratio test from econometrics to two‐sample summary‐data MR. We also use the proposed Anderson–Rubin test to develop a point estimator and to detect invalid instruments. We conclude with a simulation and an empirical study and show that the proposed tests control size and have better power than existing methods with weak instruments.
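A sketch of the Anderson–Rubin statistic in the two-sample summary-data setting as we understand it, assuming independent SNPs with known standard errors; the paper's exact construction (and its Kleibergen and conditional likelihood ratio extensions) may differ.

```python
import numpy as np
from scipy import stats

def ar_test(beta0, gamma_hat, Gamma_hat, se_gamma, se_Gamma):
    """Anderson-Rubin test of H0: beta = beta0 with J independent SNPs.

    gamma_hat, se_gamma: SNP-exposure estimates and standard errors.
    Gamma_hat, se_Gamma: SNP-outcome estimates and standard errors.
    Under H0, Gamma_hat - beta0 * gamma_hat is mean-zero normal with
    variance se_Gamma^2 + beta0^2 * se_gamma^2 per SNP (the two samples
    are independent), so the standardized sum of squares is chi^2_J.
    """
    resid = Gamma_hat - beta0 * gamma_hat
    var = se_Gamma**2 + beta0**2 * se_gamma**2
    ar = np.sum(resid**2 / var)
    return ar, stats.chi2.sf(ar, df=len(gamma_hat))

# A weak-instrument-robust confidence set: invert the test over a grid.
# beta_grid = np.linspace(-2, 2, 4001)
# conf_set = [b for b in beta_grid if ar_test(b, g, G, sg, sG)[1] > 0.05]
```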
-
ABSTRACT: Probability surveys are challenged by increasing nonresponse rates, resulting in biased statistical inference. Auxiliary information about populations can be used to reduce bias in estimation. Often, continuous auxiliary variables in administrative records are discretized before release to the public to avoid confidentiality breaches. This may weaken the utility of the administrative records for improving survey estimates, particularly when there is a strong relationship between the continuous auxiliary information and the survey outcome. In this paper, we propose a two‐step strategy: first, statistical agencies use the confidential continuous auxiliary data in the population to estimate the response propensity score of the survey sample, which is then included in a modified population data set released to data users. In the second step, data users, who do not have access to the confidential continuous auxiliary data, conduct predictive survey inference by including the discretized continuous variables and the propensity score as predictors, using splines, in a Bayesian model. We show by simulation that the proposed method performs well, yielding more efficient estimates of population means, with 95% credible intervals providing better coverage than alternative approaches. We illustrate the proposed method using the Ohio Army National Guard Mental Health Initiative (OHARNG‐MHI). The methods developed in this work are readily available in the R package AuxSurvey.
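A minimal, non-Bayesian Python sketch of the two-step flow described above; the paper itself fits a Bayesian spline model (implemented in the AuxSurvey R package), and the function names and the logistic/B-spline choices here are our own illustrative assumptions.

```python
import numpy as np
import statsmodels.api as sm
from patsy import dmatrix

# Step 1 (agency side, with confidential continuous auxiliaries X_cont):
# estimate the response propensity and release it in place of X_cont.
def estimate_propensity(X_cont, responded):
    model = sm.Logit(responded, sm.add_constant(X_cont)).fit(disp=0)
    return model.predict(sm.add_constant(X_cont))

# Step 2 (data-user side, with discretized auxiliaries X_disc and the
# released propensity score): build a predictor matrix that enters the
# score flexibly through a B-spline basis.
def outcome_design(X_disc, pscore):
    spline = dmatrix("bs(p, df=5)", {"p": pscore}, return_type="dataframe")
    return np.column_stack([X_disc, spline])
```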
-
Summary: CRISPR genome engineering and single-cell RNA sequencing have accelerated biological discovery. Single-cell CRISPR screens unite these two technologies, linking genetic perturbations in individual cells to changes in gene expression and illuminating regulatory networks underlying diseases. Despite their promise, single-cell CRISPR screens present considerable statistical challenges. We demonstrate through theoretical and real data analyses that a standard method for estimation and inference in single-cell CRISPR screens, "thresholded regression," exhibits attenuation bias and a bias-variance tradeoff as a function of an intrinsic, challenging-to-select tuning parameter. To overcome these difficulties, we introduce GLM-EIV ("GLM-based errors-in-variables"), a new method for single-cell CRISPR screen analysis. GLM-EIV extends the classical errors-in-variables model to responses and noisy predictors that are exponential family-distributed and potentially impacted by the same set of confounding variables. We develop a computational infrastructure to deploy GLM-EIV across hundreds of processors on clouds (e.g., Microsoft Azure) and high-performance clusters. Leveraging this infrastructure, we apply GLM-EIV to analyze two recent, large-scale, single-cell CRISPR screen datasets, yielding several new insights.
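A toy Python simulation, with invented count distributions, of the attenuation bias of thresholded regression that the summary mentions: the estimated effect moves with the threshold (the challenging-to-select tuning parameter) and, in this toy, stays below the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Latent perturbation indicator and a noisy count proxy for it
# (loosely inspired by gRNA counts; the distributions are made up).
perturbed = rng.random(n) < 0.2
counts = rng.poisson(np.where(perturbed, 5.0, 1.0))

true_effect = 2.0
y = true_effect * perturbed + rng.normal(size=n)

# "Thresholded regression": dichotomize the noisy proxy at a cutoff c,
# then regress y on the thresholded (mismeasured) predictor.
for c in (2, 4, 8):
    x_hat = (counts >= c).astype(float)
    cov = np.cov(x_hat, y)
    slope = cov[0, 1] / cov[0, 0]  # OLS slope on the thresholded predictor
    print(f"threshold {c}: estimate {slope:.2f} (truth {true_effect})")
```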