We consider the problem of non-parametric conditional independence (CI) testing for continuous random variables. Given i.i.d. samples from the joint distribution f(x, y, z) of continuous random vectors X, Y and Z, we determine whether X is independent of Y given Z. We approach this by converting the conditional independence test into a classification problem, which allows us to harness very powerful classifiers such as gradient-boosted trees and deep neural networks. These models can handle complex probability distributions and allow us to perform significantly better than the prior state of the art for high-dimensional CI testing. The main technical challenge in the classification problem is the need for samples from the conditional product distribution f^{CI}(x, y, z) = f(x|z) f(y|z) f(z), which equals the joint distribution if and only if X is independent of Y given Z, when given access only to i.i.d. samples from the true joint distribution f(x, y, z). To tackle this problem we propose a novel nearest neighbor bootstrap procedure and show theoretically that our generated samples are indeed close to f^{CI} in total variation distance. We then develop generalization bounds for classification in our setting, which translate into error bounds for CI testing, including a novel analysis of Rademacher-type classification bounds in the presence of non-i.i.d., near-independent samples. We empirically validate the performance of our algorithm on simulated and real datasets and show performance gains over previous methods.
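The nearest neighbor bootstrap described above can be sketched in a few lines: each sample keeps its (x, z) pair but borrows the y-value of the sample whose z-value is closest, which approximately decouples X and Y given Z. A minimal sketch with assumed variable names, not the authors' reference implementation:

```python
import numpy as np

def nn_bootstrap(x, y, z):
    """Approximate a draw from f(x|z) f(y|z) f(z) given joint samples.

    Each sample keeps its (x, z) pair but borrows the y-value of its
    nearest neighbor in Z-space; when the neighbor's z is close, this
    approximately breaks any X-Y dependence not mediated by Z.
    """
    z = np.asarray(z, dtype=float)
    if z.ndim == 1:
        z = z[:, None]
    d = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)   # a point never picks itself
    nn = d.argmin(axis=1)
    return x, y[nn], z

# Toy data where X is independent of Y given Z, so the swap is nearly harmless.
rng = np.random.default_rng(0)
z = rng.normal(size=300)
x = z + 0.1 * rng.normal(size=300)
y = z + 0.1 * rng.normal(size=300)
xb, yb, zb = nn_bootstrap(x, y, z)
```

A classifier trained to distinguish original triples (x, y, z) from bootstrapped ones (x, y[nn], z) should then achieve accuracy near 1/2 exactly when X is independent of Y given Z.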
GRASP: a goodness-of-fit test for classification learning
Abstract: Performance of classifiers is often measured in terms of average accuracy on test data. Despite being a standard measure, average accuracy fails to characterise the fit of the model to the underlying conditional law of labels given the feature vector (Y∣X), e.g. due to model misspecification, overfitting, or high dimensionality. In this paper, we consider the fundamental problem of assessing the goodness-of-fit of a general binary classifier. Our framework makes no parametric assumption on the conditional law Y∣X and treats it as a black-box oracle model that can be accessed only through queries. We formulate the goodness-of-fit assessment problem as a tolerance hypothesis test of the form H0: E[Df(Bern(η(X)) ‖ Bern(η̂(X)))] ≤ τ, where Df is an f-divergence and η(x) and η̂(x) denote, respectively, the true and estimated likelihoods of a feature vector x admitting a positive label. We propose a novel test, called Goodness-of-fit with Randomisation and Scoring Procedure (GRASP), for testing H0, which works in finite-sample settings regardless of the feature distribution (it is distribution-free). We also propose model-X GRASP, designed for model-X settings where the joint distribution of the feature vector is known; model-X GRASP uses this distributional information to achieve better power. We evaluate the performance of our tests through extensive numerical experiments.
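The divergence inside H0 is easy to evaluate for any pair of likelihoods once an f-divergence is fixed. As an illustration with KL chosen as Df and assumed variable names (eta for the true likelihood, eta_hat for the estimate), a Monte-Carlo estimate of the expectation under a toy miscalibrated model:

```python
import numpy as np

def bern_kl(p, q, eps=1e-12):
    """KL(Bern(p) || Bern(q)), with p, q clipped away from {0, 1} for stability."""
    p = np.clip(p, eps, 1 - eps)
    q = np.clip(q, eps, 1 - eps)
    return p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))

# Monte-Carlo estimate of E[KL(Bern(eta(X)) || Bern(eta_hat(X)))] under a
# toy misspecification: eta_hat shrinks the true eta toward 1/2.
rng = np.random.default_rng(1)
eta = rng.uniform(0.05, 0.95, size=10_000)
eta_hat = 0.5 + 0.8 * (eta - 0.5)
gap = float(bern_kl(eta, eta_hat).mean())
```

The null H0 then asserts that this expected divergence is at most the tolerance τ; a perfectly calibrated model (eta_hat = eta) gives a divergence of exactly zero.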
- Award ID(s):
- 1844481
- PAR ID:
- 10464650
- Publisher / Repository:
- Oxford University Press
- Date Published:
- Journal Name:
- Journal of the Royal Statistical Society Series B: Statistical Methodology
- Volume:
- 86
- Issue:
- 1
- ISSN:
- 1369-7412
- Format(s):
- Medium: X
- Size(s):
- p. 215-245
- Sponsoring Org:
- National Science Foundation
More Like this
-
The conditional mutual information I(X; Y | Z) measures the average information that X and Y contain about each other given Z. This is an important primitive in many learning problems, including conditional independence testing, graphical model inference, causal strength estimation, and time-series problems. In several applications, it is desirable to have a functional purely of the conditional distribution p(y|x, z) rather than of the joint distribution p(x, y, z). We define the potential conditional mutual information as the conditional mutual information calculated with a modified joint distribution p(y|x, z) q(x, z), where q(x, z) is a potential distribution fixed a priori. We develop k-nearest-neighbor based estimators for this functional, employing importance sampling and a coupling trick, and prove the finite-k consistency of such an estimator. We demonstrate that the estimator has excellent practical performance and show an application in dynamical system inference.
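The modification above is easiest to see in the discrete case: keep the conditional p(y|x, z) from the data-generating law but replace p(x, z) with the potential q(x, z), then evaluate the usual conditional mutual information under the modified joint. A sketch under assumed notation (the paper's estimators are k-nearest-neighbor based for continuous variables; this is only the discrete definition):

```python
import numpy as np

def potential_cmi(p_xyz, q_xz):
    """I(X; Y | Z) computed under the modified joint p(y|x,z) q(x,z).

    p_xyz: joint pmf p(x, y, z), shape (nx, ny, nz), strictly positive.
    q_xz:  potential pmf q(x, z), shape (nx, nz), fixed a priori.
    """
    p_xz = p_xyz.sum(axis=1)                      # p(x, z)
    p_y_given_xz = p_xyz / p_xz[:, None, :]       # p(y | x, z)
    m = p_y_given_xz * q_xz[:, None, :]           # modified joint m(x, y, z)
    m_z = m.sum(axis=(0, 1))
    m_xz = m.sum(axis=1)
    m_yz = m.sum(axis=0)
    ratio = m * m_z[None, None, :] / (m_xz[:, None, :] * m_yz[None, :, :])
    return float(np.sum(m * np.log(ratio)))

# Sanity check: if p(y|x,z) does not depend on x, then X and Y are
# conditionally independent under the modified joint for any potential q,
# so the functional should be (numerically) zero.
p_xz_ = np.array([[0.1, 0.2], [0.3, 0.4]])        # p(x, z)
p_y_given_z = np.array([[0.3, 0.6], [0.7, 0.4]])  # p(y | z), columns sum to 1
p_xyz = p_xz_[:, None, :] * p_y_given_z[None, :, :]
val = potential_cmi(p_xyz, np.full((2, 2), 0.25))
```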
-
Point processes provide a powerful framework for modeling the distribution and interactions of events in time or space. Their flexibility has given rise to a variety of sophisticated models in statistics and machine learning, yet model diagnostic and criticism techniques remain underdeveloped. In this work, we propose a general Stein operator for point processes based on the Papangelou conditional intensity function. We then establish a kernel goodness-of-fit test by defining a Stein discrepancy measure for general point processes. Notably, our test also applies to non-Poisson point processes whose intensity functions contain intractable normalization constants due to the presence of complex interactions among points. We apply our proposed test to several point process models, and show that it outperforms a two-sample test based on the maximum mean discrepancy.
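For a homogeneous Poisson process, the Papangelou conditional intensity reduces to the ordinary rate, and the Stein-type identity underlying such operators reduces to the Georgii-Nguyen-Zessin (Mecke) formula, which is easy to check numerically. A toy illustration under assumed names, not the paper's kernel test:

```python
import numpy as np

def mecke_gap(lam, h, n_sims=20_000, seed=0):
    """E[sum_{x in X} h(x)] minus lam * integral of h over [0, 1].

    X is a homogeneous Poisson process with rate lam on [0, 1]; by the
    GNZ formula the gap should vanish up to Monte-Carlo error, whereas
    a non-Poisson process would generally break the identity.
    """
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_sims):
        pts = rng.uniform(0.0, 1.0, size=rng.poisson(lam))
        total += h(pts).sum()
    lhs = total / n_sims
    mid = (np.arange(1_000) + 0.5) / 1_000   # midpoint quadrature on [0, 1]
    rhs = lam * h(mid).mean()
    return lhs - rhs

gap = mecke_gap(5.0, lambda u: np.sin(2 * np.pi * u) ** 2)
```

Goodness-of-fit testing in this spirit measures how far such an identity is violated when the candidate model's conditional intensity is substituted for the true one.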
-
We consider the problem of hypothesis testing for discrete distributions. In the standard model, where we have sample access to an underlying distribution p, extensive research has established optimal bounds for uniformity testing, identity testing (goodness of fit), and closeness testing (equivalence or two-sample testing). We explore these problems in a setting where a predicted data distribution, possibly derived from historical data or predictive machine learning models, is available. We demonstrate that such a predictor can indeed reduce the number of samples required for all three property testing tasks. The reduction in sample complexity depends directly on the predictor’s quality, measured by its total variation distance from p. A key advantage of our algorithms is their adaptability to the precision of the prediction. Specifically, our algorithms can self-adjust their sample complexity based on the accuracy of the available prediction, operating without any prior knowledge of the estimation’s accuracy (i.e. they are consistent). Additionally, we never use more samples than the standard approaches require, even if the predictions provide no meaningful information (i.e. they are also robust). We provide lower bounds to indicate that the improvements in sample complexity achieved by our algorithms are information-theoretically optimal. Furthermore, experimental results show that the performance of our algorithms on real data significantly exceeds our worst-case guarantees for sample complexity, demonstrating the practicality of our approach.
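The quantity driving the sample-complexity savings above is the predictor's total variation distance from p, which for discrete distributions is half the ℓ1 distance between the probability vectors. A small helper with assumed names and toy pmfs:

```python
import numpy as np

def tv_distance(p, q):
    """Total variation distance between two discrete pmfs: 0.5 * sum |p_i - q_i|."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    return 0.5 * float(np.abs(p - q).sum())

# A hypothetical true pmf and a predicted pmf from some upstream model.
tv = tv_distance([0.5, 0.3, 0.2], [0.4, 0.4, 0.2])
```

A perfect predictor has tv = 0, and the abstract's robustness guarantee covers the other extreme, where tv is large and the prediction carries no usable information.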
-
Abstract Environmental decisions with substantial social and environmental implications are regularly informed by model predictions, incurring inevitable uncertainty. The selection of a set of model predictions to inform a decision is usually based on model performance, measured by goodness‐of‐fit metrics. Yet goodness‐of‐fit metrics have a questionable relationship to a model's value to end users, particularly when validation data are themselves uncertain. For example, decisions based on flow frequency models are not necessarily improved by adopting models with the best overall goodness of fit. We propose an alternative model evaluation approach based on the conditional value of sample information, first defined in 1961, which has found extensive use in sampling design optimization but which has not previously been used for model evaluation. The metric uses observations from a validation set to estimate the expected monetary costs associated with model prediction uncertainties. A model is only considered superior to alternatives if (i) its predictions reduce these costs and (ii) sufficient validation data are available to distinguish its performance from alternative models. By describing prediction uncertainties in monetary terms, the metric facilitates the communication of prediction uncertainty by end users, supporting the inclusion of uncertainty analysis in decision making.
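The contrast with goodness-of-fit scoring can be made concrete: under an asymmetric cost model, a model with the same absolute prediction error can be far more costly than an alternative. A minimal sketch with an entirely assumed cost function and toy validation data (not the paper's conditional-value-of-sample-information metric itself):

```python
import numpy as np

def expected_cost(pred, obs, cost):
    """Average monetary cost of prediction errors over a validation set."""
    return float(np.mean([cost(p, o) for p, o in zip(pred, obs)]))

# Toy asymmetric cost: under-predicting a flow level is 10x as costly
# as over-predicting it (e.g. flood damage vs. over-built defenses).
cost = lambda p, o: 10.0 * (o - p) if o > p else (p - o)

obs = np.array([1.0, 2.0, 3.0])          # validation observations
model_a = np.array([1.1, 2.1, 3.1])      # slight over-prediction
model_b = np.array([0.9, 1.9, 2.9])      # slight under-prediction
ca = expected_cost(model_a, obs, cost)
cb = expected_cost(model_b, obs, cost)
```

Both models have identical absolute error, so symmetric goodness-of-fit metrics cannot separate them, yet their expected costs differ by an order of magnitude; the approach above ranks models by such cost estimates rather than by fit.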
