
Title: Conjugate Energy-Based Models
In this paper, we propose conjugate energy-based models (CEBMs), a new class of energy-based models that define a joint density over data and latent variables. The joint density of a CEBM decomposes into an intractable distribution over data and a tractable posterior over latent variables. CEBMs have similar use cases to variational autoencoders, in the sense that they learn an unsupervised mapping from data to latent variables. However, these models omit a generator network, which allows them to learn more flexible notions of similarity between data points. Our experiments demonstrate that conjugate EBMs achieve competitive results in terms of image modelling, predictive power of the latent space, and out-of-domain detection on a variety of datasets.
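As a schematic of the factorization described in the abstract (the exponential-family form of the posterior is an illustrative reading of "conjugate" here, not the paper's exact parametrization):

    % Joint density over data x and latents z: an intractable marginal over
    % data times a tractable, exponential-family posterior over latents.
    p_\theta(x, z) \;=\; \underbrace{p_\theta(x)}_{\text{intractable}}\;
                         \underbrace{p_\theta(z \mid x)}_{\text{tractable}},
    \qquad
    p_\theta(z \mid x) \;=\; h(z)\,\exp\!\big\{\langle \eta_\theta(x),\, t(z)\rangle - A(\eta_\theta(x))\big\},

where \eta_\theta(x) are natural parameters produced by an encoder network, t(z) are sufficient statistics, and A is the log-normalizer of the posterior family.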
Authors:
Editors:
Meila, M.; Zhang, T.
Award ID(s):
1835309
Publication Date:
NSF-PAR ID:
10280413
Journal Name:
Proceedings of the 38th International Conference on Machine Learning
Volume:
139
Page Range or eLocation-ID:
11228--11239
Sponsoring Org:
National Science Foundation
More Like this
  1. Learning causal structure from observational data has attracted much attention, and it is notoriously challenging to find the underlying structure in the presence of confounders (hidden direct common causes of two variables). In this paper, by properly leveraging the non-Gaussianity of the data, we propose to estimate the structure over latent variables with the so-called Triad constraints: we design a form of "pseudo-residual" from three variables, and show that when causal relations are linear and noise terms are non-Gaussian, the causal direction between the latent variables for the three observed variables is identifiable by checking a certain kind of independence relationship. In other words, the Triad constraints help us to locate latent confounders and determine the causal direction between them. This goes far beyond the Tetrad constraints and reveals more information about the underlying structure from non-Gaussian data. Finally, based on the Triad constraints, we develop a two-step algorithm to learn the causal structure corresponding to measurement models. Experimental results on both synthetic and real data demonstrate the effectiveness and reliability of our method.
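    A minimal numpy sketch of the pseudo-residual construction described above (the variable names, the toy data, and the nonlinear-correlation proxy used in place of a proper statistical independence test are all illustrative assumptions):

        import numpy as np

        def pseudo_residual(xi, xj, xk):
            # Triad-style pseudo-residual: e_{(i,j)|k} = x_i - (cov(x_i, x_k) / cov(x_j, x_k)) * x_j
            a = np.cov(xi, xk)[0, 1] / np.cov(xj, xk)[0, 1]
            return xi - a * xj

        def nonlinear_corr(u, v):
            # Crude dependence proxy: correlation after a nonlinearity.
            # A real Triad test would use a genuine independence test (e.g. HSIC).
            return abs(np.corrcoef(np.tanh(u), v)[0, 1])

        # Toy data: a single non-Gaussian latent confounder L drives x1, x2, x3.
        rng = np.random.default_rng(0)
        L = rng.uniform(-1, 1, 5000)
        x1 = 1.0 * L + 0.3 * rng.uniform(-1, 1, 5000)
        x2 = 0.8 * L + 0.3 * rng.uniform(-1, 1, 5000)
        x3 = 1.2 * L + 0.3 * rng.uniform(-1, 1, 5000)

        e = pseudo_residual(x1, x2, x3)
        print("proxy dependence between e_{(1,2)|3} and x3:", nonlinear_corr(e, x3))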
  2. While conventional Type Ia supernova (SN Ia) cosmology analyses rely primarily on rest-frame optical light curves to determine distances, SNe Ia are excellent standard candles in near-infrared (NIR) light, which is significantly less sensitive to dust extinction. An SN Ia spectral energy distribution (SED) model capable of fitting rest-frame NIR observations is necessary to fully leverage current and future SN Ia data sets from ground- and space-based telescopes including HST, LSST, JWST, and RST. We construct a hierarchical Bayesian model for SN Ia SEDs, continuous over time and wavelength, from the optical to NIR (B through H, or 0.35–1.8 μm). We model the SED as a combination of physically distinct host-galaxy dust and intrinsic spectral components. The distribution of intrinsic SEDs over time and wavelength is modelled with probabilistic functional principal components and the covariance of residual functions. We train the model on a nearby sample of 79 SNe Ia with joint optical and NIR light curves by sampling the global posterior distribution over dust and intrinsic latent variables, SED components, and population hyperparameters. Photometric distances of SNe Ia with NIR data near maximum obtain a total RMS error of 0.10 mag with our BayeSN model, compared to 0.13–0.14 mag with SALT2 and SNooPy for the same sample. Jointly fitting the optical and NIR data of the full sample up to moderate reddening (host E(B − V) < 0.4) for a global host dust law, we find R_V = 2.9 ± 0.2, consistent with the Milky Way average.
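    As a schematic of the decomposition described above (the symbols below are illustrative, not the authors' exact parametrization), the observed SED is an intrinsic component, built from a mean surface plus functional principal components and a residual function, attenuated by host-galaxy dust:

        % Illustrative form only: intrinsic SED (mean + functional principal
        % components + residual) dimmed by a host dust law with global R_V.
        S_{\mathrm{obs}}(t,\lambda) \;=\; S_{\mathrm{int}}(t,\lambda)\, 10^{-0.4\, A_\lambda(E(B-V),\, R_V)},
        \qquad
        -2.5\log_{10} S_{\mathrm{int}}(t,\lambda) \;=\; W_0(t,\lambda) + \sum_k \theta_k\, W_k(t,\lambda) + \epsilon(t,\lambda),

    with W_0 the mean intrinsic surface, W_k the principal components weighted by per-supernova latent coefficients \theta_k, and \epsilon(t,\lambda) a residual function with a learned covariance.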
  3. A fast algorithm for the large-scale joint inversion of gravity and magnetic data is developed. The algorithm uses a non-linear Gramian constraint to impose correlation between the density and susceptibility of the reconstructed models. The global objective function is formulated in the space of the weighted parameters, but the Gramian constraint is implemented in the original space, and the non-linear constraint is imposed using two separate Lagrange parameters, one for each model domain. Significantly, this combined approach, which uses both spaces, yields greater similarity between the reconstructed models. Moreover, it is shown theoretically that the gradient obtained in the unweighted space is not a scalar multiple of that obtained in the weighted space, and hence the difference cannot be compensated for by adjusting the Lagrange parameters. It is assumed that the measured data are obtained on a uniform grid and that a consistent regular discretization of the volume domain is imposed. The sensitivity matrices then exhibit a block-Toeplitz-Toeplitz-block (BTTB) structure for each depth layer of the model domain, and both forward and transpose operations with these matrices can be implemented efficiently using two-dimensional fast Fourier transforms. This makes it feasible to solve large-scale problems with respect to both computational cost and memory demands, and to solve the non-linear problem with iterative methods that rely only on matrix–vector multiplications. As such, the use of the regularized reweighted conjugate gradient algorithm, in conjunction with the structure of the sensitivity matrices, leads to a fast methodology for large-scale joint inversion of geophysical data sets. Numerical simulations demonstrate that it is possible to apply a non-linear joint inversion algorithm, with Lp-norm stabilisers, for the reconstruction of large model domains on a standard laptop computer. It is demonstrated that, while the p = 1 choice provides sparse reconstructed solutions with sharp boundaries, it is also possible to use p = 2 to obtain smooth and blurred models. The methodology is used to invert gravity and magnetic data obtained over an area in the northwest of the Mesoproterozoic St Francois Terrane in southeast Missouri, USA.
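    To make the FFT remark above concrete, here is a small self-contained numpy/scipy sketch (grid sizes and the kernel below are arbitrary toy choices, not taken from the paper) verifying that a BTTB matrix-vector product is a cropped 2-D convolution, which is what allows forward and transpose operations to be carried out with 2-D FFTs:

        import numpy as np
        from scipy.signal import fftconvolve

        n1, n2 = 6, 5                                   # toy model grid
        rng = np.random.default_rng(1)
        kernel = rng.standard_normal((2 * n1 - 1, 2 * n2 - 1))   # entries t[d1, d2] for offsets -(n-1)..(n-1)
        x = rng.standard_normal((n1, n2))               # model vector on the 2-D grid

        # Dense BTTB matrix, built explicitly only to check the fast version.
        A = np.zeros((n1 * n2, n1 * n2))
        for i1 in range(n1):
            for i2 in range(n2):
                for j1 in range(n1):
                    for j2 in range(n2):
                        A[i1 * n2 + i2, j1 * n2 + j2] = kernel[i1 - j1 + n1 - 1, i2 - j2 + n2 - 1]
        y_dense = (A @ x.ravel()).reshape(n1, n2)

        # Fast matvec: full 2-D convolution via FFT, then crop to the model grid.
        y_fft = fftconvolve(x, kernel, mode="full")[n1 - 1:2 * n1 - 1, n2 - 1:2 * n2 - 1]

        print("max abs difference:", np.abs(y_dense - y_fft).max())   # agrees to machine precision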
  4. Normalizing flows map an independent set of latent variables to their samples using a bijective transformation. Despite the exact correspondence between samples and latent variables, their high-level relationship is not well understood. In this paper, we characterize the geometric structure of flows using principal manifolds and describe the relationship between latent variables and samples using contours. We introduce a novel class of normalizing flows, called principal component flows (PCF), whose contours are their principal manifolds, and a variant for injective flows (iPCF) that is more efficient to train than regular injective flows. PCFs can be constructed using any flow architecture, are trained with a regularized maximum-likelihood objective, and can perform density estimation on all of their principal manifolds. In our experiments we show that PCFs and iPCFs are able to learn the principal manifolds over a variety of datasets. Additionally, we show that PCFs can perform density estimation on data that lie on a manifold with variable dimensionality, which is not possible with existing normalizing flows.
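    For reference, the change-of-variables identity that any normalizing flow, including a PCF, is trained with (the PCF-specific principal-manifold regularizer is described in the paper and not reproduced here):

        % Log-density of a sample x under a flow with bijection z = f(x)
        % and base density p_Z over the independent latent variables.
        \log p_X(x) \;=\; \log p_Z\big(f(x)\big) + \log\left|\det \frac{\partial f(x)}{\partial x}\right|.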
  5. We propose an explainable approach for relation extraction that mitigates the tension between generalization and explainability by jointly training for the two goals. Our approach uses a multi-task learning architecture, which jointly trains a classifier for relation extraction and a sequence model that labels words in the context of the relation that explain the decisions of the relation classifier. We also convert the model outputs to rules to bring global explanations to this approach. The sequence model is trained using a hybrid strategy: supervised, when supervision from pre-existing patterns is available, and semi-supervised otherwise. In the latter situation, we treat the sequence model's labels as latent variables and learn the assignment that maximizes the performance of the relation classifier. We evaluate the proposed approach on two datasets and show that the sequence model provides labels that serve as accurate explanations for the relation classifier's decisions and, importantly, that the joint training generally improves the performance of the relation classifier. We also evaluate the performance of the generated rules and show that the new rules are a valuable add-on to the manual rules, bringing the rule-based system much closer to the neural models.
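    A minimal PyTorch sketch of the supervised variant of this joint objective (the shared encoder, head names, and sizes are illustrative assumptions, not the authors' architecture; the semi-supervised step that treats the tags as latent variables is omitted):

        import torch
        import torch.nn as nn

        class JointRelationExplainer(nn.Module):
            # Shared encoder with two heads: a sentence-level relation classifier
            # and a token-level labeller marking the words that explain the relation.
            def __init__(self, vocab=1000, hidden=128, n_relations=10, n_tags=3):
                super().__init__()
                self.embed = nn.Embedding(vocab, hidden)
                self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
                self.rel_head = nn.Linear(2 * hidden, n_relations)
                self.tag_head = nn.Linear(2 * hidden, n_tags)

            def forward(self, tokens):
                h, _ = self.encoder(self.embed(tokens))      # (B, T, 2H)
                return self.rel_head(h.mean(dim=1)), self.tag_head(h)

        model = JointRelationExplainer()
        tokens = torch.randint(0, 1000, (4, 12))             # toy batch of token ids
        rel_gold = torch.randint(0, 10, (4,))                 # relation labels
        tag_gold = torch.randint(0, 3, (4, 12))               # explanation tags per token

        rel_logits, tag_logits = model(tokens)
        loss = nn.functional.cross_entropy(rel_logits, rel_gold) \
             + nn.functional.cross_entropy(tag_logits.reshape(-1, 3), tag_gold.reshape(-1))
        loss.backward()                                       # both tasks trained jointly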