Strictly proper scoring rules (SPSR) are incentive compatible for eliciting information about random variables from strategic agents when the principal can reward agents after the realization of the random variables. They also quantify the quality of elicited information, with more accurate predictions receiving higher scores in expectation. In this paper, we extend such scoring rules to settings where a principal elicits private probabilistic beliefs but only has access to agents’ reports. We name our solution Surrogate Scoring Rules (SSR). SSR are built on a bias-correction step and an error-rate estimation procedure for a reference answer defined using agents’ reports. We show that, with only limited information about the prior distribution of the random variables, SSR in a multi-task setting recover SPSR in expectation, as if the ground truth were available. A salient feature of SSR is therefore that they quantify the quality of information despite the lack of ground truth, just as SPSR do when ground truth is available. As a by-product, SSR induce dominant uniform strategy truthfulness in reporting. We verify our method both theoretically and empirically using data collected from real human forecasters.
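For intuition about the bias-correction step, consider a binary event and a reference answer that matches the true outcome except with known flip rates. A strictly proper score computed against the noisy reference can then be reweighted so that, conditioned on the true outcome, its expectation equals the score against the ground truth. The sketch below illustrates this in Python with a Brier-style score; the function names, the choice of score, and the assumption of known flip rates e0, e1 are ours for illustration, not the paper’s implementation.

```python
import numpy as np

def brier(p, y):
    """Quadratic (Brier-style) strictly proper score of forecast p for binary outcome y; higher is better."""
    return 1.0 - (p - y) ** 2

def surrogate_score(p, r, e0, e1):
    """Bias-corrected score of forecast p against a noisy reference label r.

    e0 = P(r = 1 | true outcome 0), e1 = P(r = 0 | true outcome 1), with e0 + e1 < 1.
    Conditioned on the true outcome y, the expectation of this score over r equals brier(p, y).
    """
    e = (e0, e1)
    num = (1 - e[1 - r]) * brier(p, r) - e[r] * brier(p, 1 - r)
    return num / (1 - e0 - e1)

# Quick Monte Carlo sanity check: the corrected score is unbiased for the true-outcome score.
rng = np.random.default_rng(0)
y, p, e0, e1 = 1, 0.8, 0.2, 0.3
flip = rng.random(100_000) < (e1 if y == 1 else e0)
refs = np.where(flip, 1 - y, y)
print(np.mean([surrogate_score(p, int(r), e0, e1) for r in refs]))  # ~ 0.96
print(brier(p, y))                                                  # = 0.96
```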
Gács-Körner Common Information Variational Autoencoder
We propose a notion of common information that allows one to quantify and separate the information that is shared between two random variables from the information that is unique to each. Our notion of common information is defined by an optimization problem over a family of functions and recovers the Gács-Körner common information as a special case. Importantly, our notion can be approximated empirically using samples from the underlying data distribution. We then provide a method to partition and quantify the common and unique information using a simple modification of a traditional variational auto-encoder. Empirically, we demonstrate that our formulation allows us to learn semantically meaningful common and unique factors of variation even on high-dimensional data such as images and videos. Moreover, on datasets where ground-truth latent factors are known, we show that we can accurately quantify the common information between the random variables.
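As a rough illustration of the "simple modification of a traditional variational auto-encoder" mentioned above, the PyTorch sketch below splits each variable’s latent code into a common block and a unique block and adds a consistency penalty that pushes the two common blocks to agree. This is a minimal sketch under our own assumptions (equal input dimensions, Gaussian encoders, a squared-error consistency penalty standing in for the Gács-Körner-style functional objective); it is not the paper’s actual model or loss.

```python
import torch
import torch.nn as nn

class CommonUniqueVAE(nn.Module):
    """Sketch: paired encoders split each variable's code into a 'common' and a 'unique' block."""

    def __init__(self, dim_x=32, dim_c=4, dim_u=4, hidden=64):
        super().__init__()
        out = 2 * (dim_c + dim_u)  # mean and log-variance for each latent dimension
        self.enc1 = nn.Sequential(nn.Linear(dim_x, hidden), nn.ReLU(), nn.Linear(hidden, out))
        self.enc2 = nn.Sequential(nn.Linear(dim_x, hidden), nn.ReLU(), nn.Linear(hidden, out))
        self.dec1 = nn.Sequential(nn.Linear(dim_c + dim_u, hidden), nn.ReLU(), nn.Linear(hidden, dim_x))
        self.dec2 = nn.Sequential(nn.Linear(dim_c + dim_u, hidden), nn.ReLU(), nn.Linear(hidden, dim_x))
        self.dim_c = dim_c

    @staticmethod
    def reparam(stats):
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return z, kl

    def loss(self, x1, x2, lam=10.0):
        z1, kl1 = self.reparam(self.enc1(x1))
        z2, kl2 = self.reparam(self.enc2(x2))
        c1, c2 = z1[:, :self.dim_c], z2[:, :self.dim_c]
        rec = ((self.dec1(z1) - x1) ** 2).sum(-1) + ((self.dec2(z2) - x2) ** 2).sum(-1)
        # Consistency penalty: the common blocks of the two variables should carry the
        # same information; an illustrative stand-in for the Gacs-Korner constraint.
        consist = ((c1 - c2) ** 2).sum(-1)
        return (rec + kl1 + kl2 + lam * consist).mean()
```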
- Award ID(s): 1943467
- PAR ID: 10520602
- Publisher / Repository: NeurIPS
- Date Published:
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Learning disentangled causal representations is a challenging problem that has gained significant attention recently due to its implications for extracting meaningful information for downstream tasks. In this work, we define a new notion of causal disentanglement from the perspective of independent causal mechanisms. We propose ICM-VAE, a framework for learning causally disentangled representations supervised by causally related observed labels. We model causal mechanisms using nonlinear learnable flow-based diffeomorphic functions to map noise variables to latent causal variables. Further, to promote the disentanglement of causal factors, we propose a causal disentanglement prior learned from auxiliary labels and the latent causal structure. We theoretically show the identifiability of causal factors and mechanisms up to permutation and elementwise reparameterization. We empirically demonstrate that our framework induces highly disentangled causal factors, improves interventional robustness, and is compatible with counterfactual generation.
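To make the mechanism-learning step above concrete, the sketch below maps exogenous noise variables to latent causal variables with learnable, invertible affine maps conditioned on each variable’s parents, visited in topological order. The affine form, the assumed-known adjacency matrix, and all names are illustrative simplifications of the flow-based diffeomorphic mechanisms described in the abstract, not the ICM-VAE implementation.

```python
import torch
import torch.nn as nn

class AffineCausalMechanisms(nn.Module):
    """Sketch: z_i = scale_i(parents) * eps_i + shift_i(parents), in topological order."""

    def __init__(self, adjacency):
        super().__init__()
        # adjacency[i, j] = 1 if z_j is a parent of z_i; indices assumed topologically ordered.
        self.register_buffer("adj", torch.as_tensor(adjacency, dtype=torch.float32))
        d = self.adj.shape[0]
        self.cond = nn.ModuleList(nn.Linear(d, 2) for _ in range(d))  # -> (log_scale, shift)

    def forward(self, eps):                       # eps: (batch, d) exogenous noise
        d, zs = eps.shape[1], []
        for i in range(d):
            prev = torch.stack(zs, dim=-1) if zs else eps.new_zeros(eps.shape[0], 0)
            pad = eps.new_zeros(eps.shape[0], d - prev.shape[1])
            parents = torch.cat([prev, pad], dim=-1) * self.adj[i]   # mask out non-parents
            log_s, shift = self.cond[i](parents).unbind(-1)
            zs.append(eps[:, i] * log_s.exp() + shift)               # invertible given parents
        return torch.stack(zs, dim=-1)
```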
-
At the biosphere–atmosphere interface, nonlinear interdependencies among components of an ecohydrological complex system can be inferred using multivariate high-frequency time series observations. Information flow among these interacting variables allows us to represent the causal dependencies in the form of a directed acyclic graph (DAG). We use high-frequency multivariate data at 10 Hz from an eddy covariance instrument located at 25 m above agricultural land in the Midwestern US to quantify the evolutionary dynamics of this complex system using a sequence of DAGs by examining the structural dependency of information flow and the associated functional response. We investigate whether functional differences correspond to structural differences or if there are no functional variations despite the structural differences. We base our analysis on the hypothesis that causal dependencies are instigated through information flow, and the resulting interactions sustain the dynamics and its functionality. To test our hypothesis, we build upon causal structure analysis in the companion paper to characterize the information flow in similarly clustered DAGs from 3-min non-overlapping contiguous windows in the observational data. We characterize functionality as the nature of interactions as discerned through redundant, unique, and synergistic components of information flow. Through this analysis, we find that in turbulence at the biosphere–atmosphere interface, the variables that control the dynamic character of the atmosphere as well as the thermodynamics are driven by non-local conditions, while the scalar transport associated with CO₂ and H₂O is mainly driven by short-term local conditions.
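The redundant, unique, and synergistic characterization of information flow used here can be illustrated with a small partial information decomposition on discretized series. The sketch below uses the Williams-Beer I_min redundancy as a stand-in measure; the quantile binning, bin count, and choice of redundancy measure are our assumptions, not the estimator used in the study.

```python
import numpy as np

def _discretize(v, bins):
    edges = np.quantile(v, np.linspace(0, 1, bins + 1)[1:-1])
    return np.digitize(v, edges)                  # integer bins 0 .. bins-1

def pid_two_sources(x1, x2, t, bins=3):
    """Split I(T; X1, X2) into redundant, unique, and synergistic parts (bits)."""
    x1, x2, t = (_discretize(v, bins) for v in (x1, x2, t))
    p = np.zeros((bins, bins, bins))
    np.add.at(p, (x1, x2, t), 1.0)                # joint counts over (x1, x2, t)
    p /= p.sum()
    p_t = p.sum(axis=(0, 1))
    p_x1t, p_x2t = p.sum(axis=1), p.sum(axis=0)   # p(x1, t) and p(x2, t)

    def mi(p_xt):                                 # I(X; T) from a joint table
        px, pt = p_xt.sum(1, keepdims=True), p_xt.sum(0, keepdims=True)
        nz = p_xt > 0
        return float(np.sum(p_xt[nz] * np.log2(p_xt[nz] / (px * pt)[nz])))

    def specific(p_xt):                           # I(T = t; X) = KL(p(x|t) || p(x)), per t
        pt, px = p_xt.sum(0), p_xt.sum(1, keepdims=True)
        p_x_given_t = p_xt / np.where(pt > 0, pt, 1.0)
        out, nz = np.zeros_like(p_xt), p_x_given_t > 0
        out[nz] = p_x_given_t[nz] * np.log2(p_x_given_t[nz] / np.broadcast_to(px, p_xt.shape)[nz])
        return out.sum(axis=0)

    redundancy = float(np.sum(p_t * np.minimum(specific(p_x1t), specific(p_x2t))))
    u1, u2 = mi(p_x1t) - redundancy, mi(p_x2t) - redundancy
    synergy = mi(p.reshape(bins * bins, bins)) - redundancy - u1 - u2
    return redundancy, u1, u2, synergy
```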
-
Conditional Mutual Information (CMI) is a measure of conditional dependence between random variables X and Y, given another random variable Z. It can be used to quantify conditional dependence among variables in many data-driven inference problems such as graphical models, causal learning, feature selection, and time-series analysis. While k-nearest neighbor (kNN) based estimators as well as kernel-based methods have been widely used for CMI estimation, they suffer severely from the curse of dimensionality. In this paper, we leverage advances in classifiers and generative models to design methods for CMI estimation. Specifically, we introduce an estimator for KL-Divergence based on the likelihood ratio by training a classifier to distinguish the observed joint distribution from the product distribution. We then show how to construct several CMI estimators using this basic divergence estimator by drawing ideas from conditional generative models. We demonstrate that the estimates from our proposed approaches do not degrade in performance with increasing dimension and obtain significant improvement over the widely used KSG estimator. Finally, as an application of accurate CMI estimation, we use our best estimator for conditional independence testing and outperform the state-of-the-art tester on both simulated and real datasets.
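The classifier-based divergence estimator described above can be sketched in a few lines: train a binary classifier to separate samples of the observed joint distribution from samples of the product of marginals (obtained by permuting one variable), then average the classifier’s log-odds over the joint samples to estimate the KL divergence, and hence the mutual information. The logistic-regression choice, the function names, and the chain-rule construction of CMI are illustrative assumptions, not the paper’s estimators.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def mi_classifier(x, y, seed=0):
    """Estimate I(X; Y) in nats via the likelihood-ratio (classifier) trick.

    x: (n, dx) array, y: (n, dy) array. A classifier separates joint samples
    (x_i, y_i) from product samples (x_i, y_{pi(i)}) built by permuting y; with
    balanced classes its log-odds approximate log p(x, y) / (p(x) p(y)).
    """
    rng = np.random.default_rng(seed)
    joint = np.hstack([x, y])
    product = np.hstack([x, y[rng.permutation(len(y))]])
    data = np.vstack([joint, product])
    labels = np.r_[np.ones(len(x)), np.zeros(len(x))]
    clf = LogisticRegression(max_iter=1000).fit(data, labels)
    return float(np.mean(clf.decision_function(joint)))

def cmi_classifier(x, y, z, seed=0):
    """I(X; Y | Z) via the chain rule: I(X; (Y, Z)) - I(X; Z)."""
    return mi_classifier(x, np.hstack([y, z]), seed) - mi_classifier(x, z, seed)
```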
-
Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner. A number of recent efforts have focused on learning representations that disentangle statistically independent axes of variation by introducing modifications to the standard objective function. These approaches generally assume a simple diagonal Gaussian prior and as a result are not able to reliably disentangle discrete factors of variation. We propose a two-level hierarchical objective to control the relative degree of statistical independence between blocks of variables and between individual variables within blocks. We derive this objective as a generalization of the evidence lower bound, which allows us to explicitly represent the trade-offs among the mutual information between data and representation, the KL divergence between representation and prior, and coverage of the support of the empirical data distribution. Experiments on a variety of datasets demonstrate that our objective can not only disentangle discrete variables, but that doing so also improves disentanglement of other variables and, importantly, generalization even to unseen combinations of factors.
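One standard way to make the stated trade-offs explicit is to decompose the aggregate KL term of the evidence lower bound into a data-representation mutual information term, a between-block total correlation, within-block total correlations, and dimension-wise KL terms, and then weight these pieces separately. The decomposition below is schematic and assumes a prior that factorizes across dimensions; it is written in the spirit of the two-level objective described above, not as the paper's exact equation.

$$
\mathbb{E}_{p(x)}\big[\mathrm{KL}\big(q(z \mid x)\,\|\,p(z)\big)\big]
= I_q(x; z)
+ \mathrm{KL}\Big(q(z)\,\Big\|\,\prod_{G} q(z_G)\Big)
+ \sum_{G} \mathrm{KL}\Big(q(z_G)\,\Big\|\,\prod_{j \in G} q(z_j)\Big)
+ \sum_{j} \mathrm{KL}\big(q(z_j)\,\|\,p(z_j)\big)
$$

Here G indexes blocks of latent variables and j individual dimensions; weighting the four terms with separate coefficients controls the relative degree of independence between blocks and within blocks.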