

Title: Sparse Bayesian mass mapping with uncertainties: local credible intervals
ABSTRACT

Until recently, mass-mapping techniques for weak gravitational lensing convergence reconstruction have lacked a principled statistical framework upon which to quantify reconstruction uncertainties without making strong assumptions of Gaussianity. In previous work, we presented a sparse hierarchical Bayesian formalism for convergence reconstruction that addresses this shortcoming. Here, we draw on the concept of local credible intervals (cf. Bayesian error bars) as an extension of the uncertainty quantification techniques previously detailed. These uncertainty quantification techniques are benchmarked against those recovered via Px-MALA, a state-of-the-art proximal Markov chain Monte Carlo (MCMC) algorithm. We find that our recovered uncertainties are typically conservative everywhere (they never underestimate the uncertainty, and the approximation error is bounded above), of similar magnitude to, and highly correlated with, those recovered via Px-MALA. Moreover, we demonstrate an increase in computational efficiency of $\mathcal{O}(10^6)$ when using our sparse Bayesian approach over MCMC techniques. This computational saving is critical for the application of Bayesian uncertainty quantification to large-scale Stage IV surveys such as LSST and Euclid.
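As an illustration of the local credible interval concept: the paper computes these Bayesian error bars directly from the sparse MAP reconstruction, but the quantity itself can also be estimated from posterior samples (e.g. a Px-MALA chain), which is what the minimal sketch below does. The function name, superpixel size, and synthetic samples are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def local_credible_intervals(samples, block=8, alpha=0.05):
    """(1 - alpha) Bayesian error bars at superpixel scale, estimated
    from posterior samples (e.g. a Px-MALA chain) for illustration.

    samples : (n_samples, ny, nx) stack of convergence maps.
    block   : superpixel side length in pixels.
    Returns lower/upper bound maps at superpixel resolution.
    """
    n, ny, nx = samples.shape
    assert ny % block == 0 and nx % block == 0, "block must tile the map"
    # Average each map over non-overlapping block x block superpixels.
    coarse = samples.reshape(n, ny // block, block, nx // block, block).mean(axis=(2, 4))
    lo = np.quantile(coarse, alpha / 2, axis=0)
    hi = np.quantile(coarse, 1 - alpha / 2, axis=0)
    return lo, hi

# Toy usage on synthetic "posterior samples".
rng = np.random.default_rng(0)
maps = rng.normal(size=(500, 64, 64))
lo, hi = local_credible_intervals(maps)
print(hi.shape, (hi - lo).mean())   # 8x8 superpixel grid; mean interval width
```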

 
NSF-PAR ID: 10129388
Publisher / Repository: Oxford University Press
Journal Name: Monthly Notices of the Royal Astronomical Society
Volume: 492
Issue: 1
ISSN: 0035-8711
Page Range / eLocation ID: p. 394-404
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Parameters in climate models are usually calibrated manually, exploiting only small subsets of the available data. This precludes both optimal calibration and quantification of uncertainties. Traditional Bayesian calibration methods that allow uncertainty quantification are too expensive for climate models; they are also not robust in the presence of internal climate variability. For example, Markov chain Monte Carlo (MCMC) methods typically require $\mathcal{O}(10^5)$ model runs and are sensitive to internal variability noise, rendering them infeasible for climate models. Here we demonstrate an approach to model calibration and uncertainty quantification that requires only $\mathcal{O}(10^2)$ model runs and can accommodate internal climate variability. The approach consists of three stages: (a) a calibration stage uses variants of ensemble Kalman inversion to calibrate a model by minimizing mismatches between model and data statistics; (b) an emulation stage emulates the parameter-to-data map with Gaussian processes (GP), using the model runs in the calibration stage for training; (c) a sampling stage approximates the Bayesian posterior distributions by sampling the GP emulator with MCMC. We demonstrate the feasibility and computational efficiency of this calibrate-emulate-sample (CES) approach in a perfect-model setting. Using an idealized general circulation model, we estimate parameters in a simple convection scheme from synthetic data generated with the model. The CES approach generates probability distributions of the parameters that are good approximations of the Bayesian posteriors, at a fraction of the computational cost usually required to obtain them. Sampling from this approximate posterior allows the generation of climate predictions with quantified parametric uncertainties.
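A minimal one-parameter sketch of the three CES stages, using a toy forward map in place of a climate model; the forward map, prior, ensemble size, and all other settings below are illustrative assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

def forward(theta):                  # toy parameter-to-data map (stand-in for a GCM)
    return np.sin(theta) + 0.1 * theta**2

theta_true, noise_sd = 1.2, 0.05
y = forward(theta_true) + noise_sd * rng.normal()

# (a) Calibrate: ensemble Kalman inversion, saving runs as emulator training data.
ens = rng.normal(0.0, 2.0, size=40)
train_x, train_y = [], []
for _ in range(10):
    g = forward(ens)
    train_x += list(ens); train_y += list(g)
    c_tg = np.cov(ens, g)[0, 1]                      # Cov(theta, G(theta))
    gain = c_tg / (np.var(g) + noise_sd**2)          # Kalman gain
    ens = ens + gain * (y + noise_sd * rng.normal(size=ens.size) - g)

# (b) Emulate: Gaussian-process regression of the parameter-to-data map.
gp = GaussianProcessRegressor(RBF(1.0) + WhiteKernel(1e-4))
gp.fit(np.array(train_x)[:, None], np.array(train_y))

# (c) Sample: random-walk Metropolis on the cheap GP emulator.
def log_post(t):
    m = gp.predict(np.array([[t]]))[0]
    return -0.5 * ((y - m) / noise_sd) ** 2 - 0.5 * (t / 2.0) ** 2   # N(0, 2^2) prior

t, lp, chain = 0.0, None, []
lp = log_post(t)
for _ in range(2000):
    t_new = t + 0.3 * rng.normal()
    lp_new = log_post(t_new)
    if np.log(rng.uniform()) < lp_new - lp:
        t, lp = t_new, lp_new
    chain.append(t)
print("approximate posterior mean:", np.mean(chain[500:]))
```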

     
  2. ABSTRACT

    Recovering credible cosmological parameter constraints in a weak lensing shear analysis requires an accurate model that can be used to marginalize over nuisance parameters describing potential sources of systematic uncertainty, such as the uncertainties on the sample redshift distribution n(z). Due to the challenge of running Markov chain Monte Carlo (MCMC) in the high-dimensional parameter spaces in which the n(z) uncertainties may be parametrized, it is common practice to simplify the n(z) parametrization or combine MCMC chains that each have a fixed n(z) resampled from the n(z) uncertainties. In this work, we propose a statistically principled Bayesian resampling approach for marginalizing over the n(z) uncertainty using multiple MCMC chains. We self-consistently compare the new method to existing ones from the literature in the context of a forecasted cosmic shear analysis for the HSC three-year shape catalogue, and find that these methods recover statistically consistent error bars for the cosmological parameter constraints in the forecasted HSC three-year analysis, implying that using the most computationally efficient of the approaches is appropriate. However, we find that for data sets with the constraining power of the full HSC survey data set (and, by implication, those upcoming surveys with even tighter constraints), the choice of method for marginalizing over n(z) uncertainty among the several methods from the literature may modify the 1σ uncertainties on the Ωm–S8 constraints by ∼4 per cent, and careful model selection is needed to ensure credible parameter intervals.
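A schematic of the multiple-chain idea in code: run fixed-n(z) chains and combine them. The placeholder chain, the toy n(z) perturbations, and the equal-weight pooling below are assumptions; the paper's contribution is a statistically principled resampling (reweighting) of such chains, of which equal-weight pooling is only the simplest special case.

```python
import numpy as np

rng = np.random.default_rng(1)

def run_chain_fixed_nz(nz_shift, n_steps=2000):
    """Placeholder for an MCMC over cosmological parameters run with the
    redshift distribution n(z) held fixed at one sampled realization."""
    # Toy posterior whose mean shifts with the n(z) realization.
    return rng.normal(loc=0.3 + 0.02 * nz_shift, scale=0.01, size=n_steps)

nz_shifts = rng.normal(size=10)     # toy draws from the n(z) uncertainty model
chains = [run_chain_fixed_nz(s) for s in nz_shifts]

# Equal-weight pooling marginalizes over n(z) in the simplest case; a
# principled resampling scheme would assign data-driven weights per chain.
pooled = np.concatenate(chains)
print("marginalized mean and std:", pooled.mean(), pooled.std())
```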

     
  3. Discovering interaction effects on a response of interest is a fundamental problem faced in biology, medicine, economics, and many other scientific disciplines. In theory, Bayesian methods for discovering pairwise interactions enjoy many benefits such as coherent uncertainty quantification, the ability to incorporate background knowledge, and desirable shrinkage properties. In practice, however, Bayesian methods are often computationally intractable for even moderate-dimensional problems. Our key insight is that many hierarchical models of practical interest admit a Gaussian process representation such that, rather than maintaining a posterior over all $\mathcal{O}(p^2)$ interactions, we need only maintain a vector of $\mathcal{O}(p)$ kernel hyperparameters. This implicit representation allows us to run Markov chain Monte Carlo (MCMC) over model hyperparameters in time and memory linear in p per iteration. We focus on sparsity-inducing models and show, on datasets with a variety of covariate behaviors, that our method: (1) reduces runtime by orders of magnitude over naive applications of MCMC, (2) provides lower Type I and Type II error relative to state-of-the-art LASSO-based approaches, and (3) offers improved computational scaling in high dimensions relative to existing Bayesian and LASSO-based approaches.
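A sketch of the implicit-representation insight under stated assumptions: a degree-2 polynomial kernel whose Gaussian process spans all $\mathcal{O}(p^2)$ pairwise interactions while exposing only the p hyperparameters an MCMC would target. The paper's sparsity-inducing priors and hyperparameter sampler are omitted; all names below are illustrative.

```python
import numpy as np

def interaction_kernel(X1, X2, eta):
    """k(x, x') = (sum_i eta_i * x_i * x'_i)**2.

    Sample paths of a GP with this kernel contain all O(p^2) pairwise
    interaction terms, yet the kernel is parametrized by only the p
    entries of eta; the quadratic feature map is never formed.
    """
    return ((X1 * eta) @ X2.T) ** 2      # O(n^2 p) time, O(p) parameters

rng = np.random.default_rng(2)
n, p = 50, 200
X = rng.normal(size=(n, p))
y = X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=n)    # one true interaction

eta = np.ones(p)                                    # kernel hyperparameters
K = interaction_kernel(X, X, eta)
alpha = np.linalg.solve(K + 0.1 * np.eye(n), y)     # GP posterior-mean weights
pred = K @ alpha                                    # in-sample fit
```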
  4. Many Markov chain Monte Carlo (MCMC) methods leverage gradient information of the potential function of the target distribution to explore the sample space efficiently. However, computing gradients can often be computationally expensive for large-scale applications, such as those in contemporary machine learning. Stochastic Gradient (SG-)MCMC methods approximate gradients by stochastic ones, commonly via uniformly subsampled data points, and achieve improved computational efficiency, at the price of introducing sampling error. We propose a non-uniform subsampling scheme to improve the sampling accuracy. The proposed exponentially weighted stochastic gradient (EWSG) method is designed so that a non-uniform SG-MCMC method mimics the statistical behavior of a batch-gradient MCMC method, and hence the inaccuracy due to the SG approximation is reduced. EWSG differs from classical variance reduction (VR) techniques in that it focuses on the entire distribution instead of just the variance; nevertheless, its reduced local variance is also proved. EWSG can also be viewed as an extension of the importance sampling idea, successful for stochastic-gradient-based optimization, to sampling tasks. In our practical implementation of EWSG, the non-uniform subsampling is performed efficiently via a Metropolis-Hastings chain on the data index, which is coupled to the MCMC algorithm. Numerical experiments are provided not only to demonstrate EWSG's effectiveness, but also to guide hyperparameter choices and to validate our non-asymptotic global error bound despite approximations in the implementation. Notably, while statistical accuracy is improved, convergence speed can be comparable to the uniform version, which renders EWSG a practical alternative to VR (though EWSG and VR can also be combined).
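A mechanism-level sketch on a toy Gaussian-mean posterior, showing only how a Metropolis-Hastings chain on the data index couples to a Langevin update. The weight function here is a hypothetical stand-in: the specific exponential weights that make the scheme mimic batch-gradient behavior are derived in the paper and are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

data = rng.normal(1.0, 1.0, size=1000)   # toy data; flat prior on the mean theta
N = data.size

def grad_term(theta, i):     # gradient of the i-th negative log-likelihood term
    return theta - data[i]

def weight(theta, i):
    """Hypothetical non-uniform index weight (stand-in for EWSG's
    derived exponential weights)."""
    return np.exp(-0.5 * (theta - data[i]) ** 2)

theta, idx, h = 0.0, 0, 1e-4
samples = []
for _ in range(20000):
    # Metropolis-Hastings chain over the data index, coupled to the sampler.
    prop = int(rng.integers(N))
    if rng.uniform() < weight(theta, prop) / weight(theta, idx):
        idx = prop
    # Langevin step driven by the selected datum's stochastic gradient.
    theta += -h * N * grad_term(theta, idx) + np.sqrt(2 * h) * rng.normal()
    samples.append(theta)
print("approximate posterior mean:", np.mean(samples[5000:]))
```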

     
  5. Abstract

    Uncertainty quantification of groundwater (GW) aquifer parameters is critical for efficient management and sustainable extraction of GW resources. These uncertainties are introduced by the data, model, and prior information on the parameters. Here, we develop a Bayesian inversion framework that uses Interferometric Synthetic Aperture Radar (InSAR) surface deformation data to infer the laterally heterogeneous permeability of a transient linear poroelastic model of a confined GW aquifer. The Bayesian solution of this inverse problem takes the form of a posterior probability density of the permeability. Exploring this posterior using classical Markov chain Monte Carlo (MCMC) methods is computationally prohibitive due to the large dimension of the discretized permeability field and the expense of solving the poroelastic forward problem. However, in many partial differential equation (PDE)‐based Bayesian inversion problems, the data are only informative in a few directions in parameter space. For the poroelasticity problem, we prove this property theoretically for a one‐dimensional problem and demonstrate it numerically for a three‐dimensional aquifer model. We design a generalized preconditioned Crank‐Nicolson (gpCN) MCMC method that exploits this intrinsic low dimensionality by using a low‐rank‐based Laplace approximation of the posterior as a proposal, which we build scalably. The feasibility of our approach is demonstrated through a real GW aquifer test in Nevada. The inherently two‐dimensional nature of InSAR surface deformation data informs a sufficient number of modes of the permeability field to allow detection of major structures within the aquifer, significantly reducing the uncertainty in the pressure and the displacement quantities of interest.
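A small dense-matrix sketch of a pCN-type proposal that uses a Laplace approximation of the posterior as its reference measure, in the spirit of the gpCN method. The toy linear problem below makes the Laplace approximation exact and keeps every matrix explicit, whereas the paper constructs a scalable low-rank approximation for PDE-based forward models; all settings are illustrative assumptions.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

rng = np.random.default_rng(4)

# Toy linear inverse problem y = A x + noise with a Gaussian prior on x.
d, m = 20, 5
A = rng.normal(size=(m, d)) / np.sqrt(d)
sigma = 0.1
y = A @ rng.normal(size=d) + sigma * rng.normal(size=m)
C_prior = np.eye(d)

def neg_log_like(x):
    r = y - A @ x
    return 0.5 * (r @ r) / sigma**2

# Laplace approximation N(x_map, C_lap); exact here because the map is linear.
H = A.T @ A / sigma**2 + np.linalg.inv(C_prior)   # posterior precision
C_lap = np.linalg.inv(H)
x_map = C_lap @ (A.T @ y) / sigma**2
L = np.linalg.cholesky(C_lap)

prior, laplace = mvn(np.zeros(d), C_prior), mvn(x_map, C_lap)
beta, x = 0.5, x_map.copy()
for _ in range(5000):
    # pCN proposal around the Laplace reference: preserves N(x_map, C_lap).
    x_new = x_map + np.sqrt(1 - beta**2) * (x - x_map) + beta * (L @ rng.normal(size=d))
    # MH correction of the target (prior x likelihood) against the reference.
    log_a = (neg_log_like(x) - neg_log_like(x_new)
             + prior.logpdf(x_new) - prior.logpdf(x)
             + laplace.logpdf(x) - laplace.logpdf(x_new))
    if np.log(rng.uniform()) < log_a:
        x = x_new
```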

     