Title: Rapid Model Comparison by Amortizing Across Models
Abstract: Comparing the inferences of diverse candidate models is an essential part of model checking and escaping local optima. To enable efficient comparison, we introduce an amortized variational inference framework that can perform fast and reliable posterior estimation across models of the same architecture. Our Any Parameter Encoder (APE) extends the encoder neural network common in amortized inference to take both a data feature vector and a model parameter vector as input. APE thus reduces posterior inference across unseen data and models to a single forward pass. In experiments comparing candidate topic models for synthetic data and product reviews, our Any Parameter Encoder yields comparable posteriors to more expensive methods in far less time, especially when the encoder architecture is designed in model-aware fashion.
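To make the architecture concrete, here is a minimal sketch of the APE idea in PyTorch: a single encoder conditioned on both the data and the model parameters, so that posterior inference for a new model reduces to one forward pass. The MLP layout, layer sizes, and Gaussian posterior head are illustrative assumptions, not the authors' exact design.

```python
import torch
import torch.nn as nn

class AnyParameterEncoder(nn.Module):
    """Sketch of an APE-style encoder: maps (data features, model
    parameters) to the parameters of a variational posterior."""

    def __init__(self, data_dim, model_param_dim, latent_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(data_dim + model_param_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
        )
        # Gaussian variational posterior q(z | x, theta).
        self.mean = nn.Linear(hidden, latent_dim)
        self.log_var = nn.Linear(hidden, latent_dim)

    def forward(self, x, theta):
        # Condition on both the data x and the model parameters theta.
        h = self.net(torch.cat([x, theta], dim=-1))
        return self.mean(h), self.log_var(h)

# One forward pass yields an approximate posterior for unseen data AND an
# unseen model of the same architecture (dimensions here are hypothetical).
enc = AnyParameterEncoder(data_dim=5000, model_param_dim=100, latent_dim=10)
x = torch.randn(32, 5000)      # batch of document feature vectors
theta = torch.randn(32, 100)   # flattened topic-model parameters
mu, log_var = enc(x, theta)
```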
Award ID(s):
1908617
PAR ID:
10187129
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of The 2nd Symposium on Advances in Approximate Bayesian Inference
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Estimating parameters and their credible intervals for complex system dynamics models is challenging but critical to continuous model improvement and reliable communication with an increasing fraction of audiences. The purpose of this study is to integrate Amortized Bayesian Inference (ABI) methods with system dynamics. Utilizing Neural Posterior Estimation (NPE), we train neural networks using synthetic data (pairs of ground-truth parameters and outcome time series) to estimate parameters of system dynamics models. We apply this method to two example models: a simple Random Walk model and a moderately complex SEIRb model. We show that the trained neural networks can output the posterior for parameters instantly given new, unseen time series data. Our analysis highlights the potential of ABI to facilitate a principled, scalable, and likelihood-free inference workflow that enhances the integration of models of complex systems with data. Accompanying code streamlines application to diverse system dynamics models.
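A minimal sketch of the NPE recipe described above, for the Random Walk example: a network is trained on synthetic (parameter, time series) pairs and then returns a posterior for unseen series instantly. The Gaussian posterior head below stands in for the normalizing flows used by real ABI toolkits (e.g., the sbi or BayesFlow packages); the prior and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

T, BATCH = 50, 256

def simulate(batch):
    # Ground-truth parameter: log step-size of the random walk.
    log_sigma = torch.randn(batch, 1)              # assumed prior N(0, 1)
    steps = torch.randn(batch, T) * log_sigma.exp()
    return log_sigma, steps.cumsum(dim=1)          # (theta, time series)

net = nn.Sequential(nn.Linear(T, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    theta, series = simulate(BATCH)
    mu, log_std = net(series).chunk(2, dim=-1)
    # Train by maximizing the posterior density of the true parameter
    # (Gaussian negative log-likelihood, up to a constant).
    nll = 0.5 * ((theta - mu) / log_std.exp()) ** 2 + log_std
    loss = nll.mean()
    opt.zero_grad(); loss.backward(); opt.step()

# After training, the posterior for a new, unseen series is one forward pass.
theta_true, new_series = simulate(1)
mu, log_std = net(new_series).chunk(2, dim=-1)
```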
  2. A key advance in learning generative models is the use of amortized inference distributions that are jointly trained with the models. We find that existing training objectives for variational autoencoders can lead to inaccurate amortized inference distributions and, in some cases, improving the objective provably degrades the inference quality. In addition, it has been observed that variational autoencoders tend to ignore the latent variables when combined with a decoding distribution that is too flexible. We again identify the cause in existing training criteria and propose a new class of objectives (Info-VAE) that mitigate these problems. We show that our model can significantly improve the quality of the variational posterior and can make effective use of the latent features regardless of the flexibility of the decoding distribution. Through extensive qualitative and quantitative analyses, we demonstrate that our models outperform competing approaches on multiple performance metrics.
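The objective class described above can be sketched in its popular MMD instantiation (sometimes called MMD-VAE), which penalizes divergence between the aggregate posterior q(z) and the prior p(z) rather than only the per-sample KL. The RBF kernel, the squared-error reconstruction term, and the lambda weight here are illustrative assumptions, not the paper's exact configuration.

```python
import torch

def rbf_mmd(z_q, z_p, scale=1.0):
    # Maximum mean discrepancy between encoder samples and prior samples.
    def kernel(a, b):
        d2 = (a.unsqueeze(1) - b.unsqueeze(0)).pow(2).sum(-1)
        return torch.exp(-d2 / (2 * scale))
    return (kernel(z_q, z_q).mean() + kernel(z_p, z_p).mean()
            - 2 * kernel(z_q, z_p).mean())

def info_vae_style_loss(x, x_recon, z_q, lam=10.0):
    # Reconstruction term plus a divergence on the AGGREGATE posterior
    # q(z), which is what lets the model keep using the latent code even
    # with a flexible decoder.
    recon = (x - x_recon).pow(2).sum(-1).mean()
    z_p = torch.randn_like(z_q)          # samples from the prior p(z)
    return recon + lam * rbf_mmd(z_q, z_p)

# Usage with hypothetical encoder/decoder outputs:
x = torch.randn(64, 784)
x_recon = torch.randn(64, 784)
z = torch.randn(64, 8)
loss = info_vae_style_loss(x, x_recon, z)
```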
  3. We pursue tractable Bayesian analysis of generalized linear models (GLMs) for categorical data. GLMs have been difficult to scale to more than a few dozen categories due to non-conjugacy or strong posterior dependencies when using conjugate auxiliary variable methods. We define a new class of GLMs for categorical data called categorical-from-binary (CB) models. Each CB model has a likelihood that is bounded by the product of binary likelihoods, suggesting a natural posterior approximation. This approximation makes inference straightforward and fast; using well-known auxiliary variables for probit or logistic regression, the product of binary models admits conjugate closed-form variational inference that is embarrassingly parallel across categories and invariant to category ordering. Moreover, an independent binary model simultaneously approximates multiple CB models. Bayesian model averaging over these can improve the quality of the approximation for any given dataset. We show that our approach scales to thousands of categories, outperforming posterior estimation competitors like Automatic Differentiation Variational Inference (ADVI) and the No-U-Turn Sampler (NUTS) in the time required to achieve fixed prediction quality.
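A rough sketch of the product-of-binary structure behind CB models: fit one independent binary model per category (the embarrassingly parallel step), then renormalize the binary probabilities into a categorical distribution. This maximum-likelihood, scikit-learn version only illustrates the structure; the paper performs conjugate Bayesian variational inference for each binary model rather than point estimation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_product_of_binary(X, y, n_categories):
    # One binary (one-vs-rest) model per category; each fit is independent
    # and could run on its own worker.
    return [LogisticRegression().fit(X, (y == k).astype(int))
            for k in range(n_categories)]

def predict_categorical(models, X):
    # Product-of-binary likelihoods, renormalized across categories.
    p = np.column_stack([m.predict_proba(X)[:, 1] for m in models])
    return p / p.sum(axis=1, keepdims=True)

# Synthetic usage example with hypothetical data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))
y = rng.integers(0, 10, size=500)
models = fit_product_of_binary(X, y, n_categories=10)
probs = predict_categorical(models, X)
```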
  4. Linear mixed-effects models play a fundamental role in statistical methodology. A variety of Markov chain Monte Carlo (MCMC) algorithms exist for fitting these models, but they are inefficient in massive data settings because every iteration of any such MCMC algorithm passes through the full data. Many divide-and-conquer methods have been proposed to solve this problem, but they lack theoretical guarantees, impose restrictive assumptions, or have complex computational algorithms. Our focus is one such method called the Wasserstein Posterior (WASP), which has become popular due to its optimal theoretical properties under general assumptions. Unfortunately, practical implementation of the WASP either requires solving a complex linear program or is limited to one-dimensional parameters. The former method is inefficient and the latter method fails to capture the joint posterior dependence structure of multivariate parameters. We develop a new algorithm for computing the WASP of multivariate parameters that is easy to implement and is useful for computing the WASP in any model where the posterior distribution of the parameter belongs to a location-scatter family of probability measures. We introduce the algorithm for linear mixed-effects models, providing both implementation details and theoretical properties. Our algorithm outperforms the current state-of-the-art method in inference on functions of the covariance matrix of the random effects across diverse numerical comparisons.
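The location-scatter structure that this algorithm exploits can be illustrated with Gaussian subset posteriors, whose 2-Wasserstein barycenter has a closed-form mean (the average of subset means) and a covariance solving a fixed-point equation (Álvarez-Esteban et al., 2016). The routine below is a generic Gaussian-barycenter sketch under those assumptions, not the paper's full WASP algorithm.

```python
import numpy as np
from scipy.linalg import sqrtm

def gaussian_barycenter(means, covs, iters=100, tol=1e-10):
    """2-Wasserstein barycenter of Gaussians via fixed-point iteration."""
    mean = np.mean(means, axis=0)      # barycenter mean: average of means
    S = np.mean(covs, axis=0)          # initial covariance guess
    for _ in range(iters):
        root = np.real(sqrtm(S))
        # Fixed-point update: S <- S^{-1/2} (mean_k (S^{1/2} C_k S^{1/2})^{1/2})^2 S^{-1/2}
        inner = np.mean([np.real(sqrtm(root @ C @ root)) for C in covs],
                        axis=0)
        root_inv = np.linalg.inv(root)
        S_new = root_inv @ inner @ inner @ root_inv
        if np.linalg.norm(S_new - S) < tol:
            break
        S = S_new
    return mean, S

# Example: combine three hypothetical 2-D Gaussian "subset posteriors".
means = [np.array([0.0, 1.0]), np.array([0.2, 0.9]), np.array([-0.1, 1.1])]
covs = [np.eye(2), np.diag([2.0, 0.5]), np.array([[1.0, 0.3], [0.3, 1.0]])]
m, S = gaussian_barycenter(means, covs)
```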