Title: Carving model-free inference
Complex studies involve many steps. Selecting promising findings based on pilot data is often the first step. As more observations are collected, the investigator must decide how to combine the new data with the pilot data to construct valid selective inference. Carving, introduced by Fithian, Sun and Taylor (2014), enables the reuse of pilot data during selective inference and accounts for the overoptimism introduced by the selection process. Until now, however, carving has been justified only for parametric models, such as the commonly used Gaussian model. In this paper, we develop the asymptotic theory needed to substantiate the use of carving beyond Gaussian models. Our results indicate that carving produces valid and tight confidence intervals in a model-free setting, as demonstrated on simulated and real data.
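The overoptimism that carving corrects can be seen in a toy simulation (illustrative only: thresholding a pilot-data mean stands in for a real selection rule, and carving's conditional inference itself is not implemented here). Naively pooling the selected pilot data with the new data undercovers, while data splitting, the baseline that carving improves upon, stays valid but discards the pilot information:

```python
import math
import random

def simulate(reps=4000, n_pilot=100, n_new=100, threshold=0.15, seed=1):
    """Coverage of nominal 95% CIs for a true mean of 0, computed only on
    runs whose pilot-data mean clears a selection threshold."""
    rng = random.Random(seed)
    naive_hits = split_hits = selected = 0
    for _ in range(reps):
        pilot = [rng.gauss(0.0, 1.0) for _ in range(n_pilot)]
        if sum(pilot) / n_pilot < threshold:
            continue                      # finding not selected: no inference
        selected += 1
        new = [rng.gauss(0.0, 1.0) for _ in range(n_new)]
        # naive: pool pilot and new data, ignoring that the pilot was selected
        full_mean = (sum(pilot) + sum(new)) / (n_pilot + n_new)
        naive_hits += abs(full_mean) <= 1.96 / math.sqrt(n_pilot + n_new)
        # splitting: discard the pilot data, infer from the fresh data only
        new_mean = sum(new) / n_new
        split_hits += abs(new_mean) <= 1.96 / math.sqrt(n_new)
    return naive_hits / selected, split_hits / selected
```

Running this, the naive intervals cover the true mean well below the nominal 95%, while the split-data intervals are close to nominal. Carving conditions on the selection event so that the pilot data can be reused without this loss.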
Award ID(s):
1951980 2113342
PAR ID:
10543405
Author(s) / Creator(s):
Publisher / Repository:
Institute of Mathematical Statistics
Date Published:
Journal Name:
The Annals of Statistics
Volume:
51
Issue:
6
ISSN:
0090-5364
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Regina Liu (Ed.)
    Several strategies have been developed recently to ensure valid inference after model selection; some are easy to compute, while others fare better in terms of inferential power. In this article, we consider a selective inference framework for Gaussian data and propose a new method for inference through approximate maximum likelihood estimation. Our goals are to (a) achieve better inferential power with the aid of randomization and (b) bypass expensive MCMC sampling from exact conditional distributions that are hard to evaluate in closed form. We construct approximate inference (e.g., p-values and confidence intervals) by solving a fairly simple convex optimization problem. We illustrate the potential of our method across a wide range of signal-to-noise ratios in simulations. On a cancer gene expression dataset, we find that our method improves upon the inferential power of some commonly used strategies for selective inference.
  2. We introduce a pivot for exact selective inference with randomization. Not only does our pivot lead to exact inference in Gaussian regression models, but it is also available in closed form. We reduce this problem to inference for a bivariate truncated Gaussian variable. By doing so, we give up some power that is achieved with approximate maximum likelihood estimation in Panigrahi & Taylor (2023). Yet our pivot always produces narrower confidence intervals than a closely related data-splitting procedure. We investigate the trade-off between power and exact selective inference on simulated datasets and an HIV drug resistance dataset. 
  3. Integral projection models (IPMs) are widely used for studying continuously size‐structured populations. IPMs require a growth sub‐model that describes the probability distribution of future size conditional on current size and any covariates. Most IPM studies assume that this distribution is Gaussian, despite calls for non‐Gaussian models that accommodate skewness and excess kurtosis. We provide a general workflow for accommodating non‐Gaussian growth patterns while retaining important covariates and random effects. Our approach emphasizes visual diagnostics from pilot Gaussian models and quantile‐based metrics of skewness and kurtosis that guide the selection of a non‐Gaussian alternative where necessary. Across six case studies, skewness and excess kurtosis were common features of growth data, and non‐Gaussian models consistently generated simulated data that matched the real data more closely than the pilot Gaussian models did. However, the effects of "improved" growth modeling on IPM results were moderate to weak and differed in direction or magnitude between different outputs from the same model. Using tools not available when IPMs were first developed, it is now possible to fit non‐Gaussian models to growth data without sacrificing ecological complexity. Doing so, guided by careful interrogation of the data, will result in models that better represent the populations for which they are intended.
  4. Geostatistical modeling for continuous point‐referenced data has been applied extensively to neuroimaging because it produces efficient and valid statistical inference. However, diffusion tensor imaging (DTI), a neuroimaging technique that characterizes the brain's anatomical structure, produces a positive‐definite (p.d.) matrix for each voxel. To date, only a few geostatistical models for p.d. matrices have been proposed, because properly introducing spatial dependence among p.d. matrices is challenging. In this paper, we use the spatial Wishart process, a spatial stochastic process (random field) in which each p.d. matrix‐variate random variable marginally follows a Wishart distribution and spatial dependence between random matrices is induced by latent Gaussian processes. This process is valid on an uncountable collection of spatial locations and is almost surely continuous, leading to a reasonable way of modeling spatial dependence. Motivated by a DTI data set of cocaine users, we propose a spatial matrix‐variate regression model based on the spatial Wishart process. One problematic issue is that the spatial Wishart process has no closed‐form density function. Hence, we propose an approximation method that yields a feasible Cholesky decomposition model, which we show to be asymptotically equivalent to the spatial Wishart process model. A local likelihood approximation method is also applied to achieve fast computation. Simulation studies and a real data application demonstrate that the Cholesky decomposition process model produces reliable inference and improved performance compared to other methods.
  5. When releasing data to the public, a vital concern is the risk of exposing personal information of the individuals who have contributed to the data set. Many mechanisms have been proposed to protect individual privacy, though less attention has been devoted to conducting valid inference on the altered, privacy-protected data sets in practice. For frequency tables, privacy-protecting perturbations often lead to negative cell counts, and releasing such tables can undermine users' confidence in the usefulness of the data. This paper focuses on releasing one-way frequency tables. We recommend an optimal mechanism that satisfies ϵ-differential privacy (DP) without producing negative cell counts. The procedure is optimal in the sense that the expected utility is maximized under a given privacy constraint. Valid inference procedures for testing goodness-of-fit are also developed for the DP privacy-protected data. In particular, we propose a de-biased test statistic for the optimal procedure and derive its asymptotic distribution. In addition, we introduce testing procedures for the commonly used Laplace and Gaussian mechanisms, which provide a good finite-sample approximation to the null distributions. Moreover, we provide the decay-rate requirements on the privacy regime under which the inference procedures remain valid. We further consider common user practices, such as merging related or neighboring cells or integrating statistical information obtained across different data sources, and derive valid testing procedures when these operations occur. Simulation studies show that our inference results hold well even when the sample size is relatively small. Comparisons with the current field standards, including the Laplace, the Gaussian (both with and without post-processing that replaces negative cell counts with zeros), and the Binomial-Beta McClure-Reiter mechanisms, are carried out. Finally, we apply our method to the National Center for Early Development and Learning's (NCEDL) multi-state studies data to demonstrate its practical applicability.
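The negative-cell problem that motivates item 5's optimal mechanism is easy to reproduce with the standard Laplace mechanism (a minimal stdlib sketch; the paper's optimal mechanism and de-biased test statistics are not implemented here). For a one-way frequency table, adding or removing one individual changes a single cell by 1, so the L1 sensitivity is 1 and Laplace(1/ϵ) noise per cell yields ϵ-DP:

```python
import math
import random

def laplace_mechanism(counts, epsilon, seed=0):
    """epsilon-DP release of a one-way frequency table via Laplace noise.
    Sensitivity is 1 (one person changes one cell by 1), so scale = 1/epsilon."""
    rng = random.Random(seed)
    scale = 1.0 / epsilon
    noisy = []
    for c in counts:
        u = rng.random() - 0.5  # uniform on [-0.5, 0.5)
        # inverse-CDF sample from Laplace(0, scale)
        noise = -scale * math.copysign(1.0, u) * math.log1p(-2.0 * abs(u))
        noisy.append(c + noise)
    return noisy
```

With small cell counts or a strict privacy budget, some released cells are almost surely negative; clipping them to zero preserves DP but biases goodness-of-fit tests, which is the distortion the abstract's de-biased statistics are designed to correct.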