Title: Bilby-MCMC: an MCMC sampler for gravitational-wave inference
Abstract: We introduce Bilby-MCMC, a Markov chain Monte Carlo sampling algorithm tuned for the analysis of gravitational waves from merging compact objects. Bilby-MCMC provides a parallel-tempered ensemble Metropolis-Hastings sampler with access to a block-updating proposal library, including problem-specific and machine learning proposals. We demonstrate that learning proposals can produce over a 10-fold improvement in efficiency by reducing the autocorrelation time. Using a variety of standard and problem-specific tests, we validate the ability of the Bilby-MCMC sampler to produce independent posterior samples and estimate the Bayesian evidence. Compared to the widely used Dynesty nested sampling algorithm, Bilby-MCMC is less efficient in producing independent posterior samples and less accurate in its estimation of the evidence. However, we find that posterior samples drawn from the Bilby-MCMC sampler are more robust: they never fail to pass our validation tests. Meanwhile, the Dynesty sampler fails the difficult-to-sample Rosenbrock likelihood test, over-constraining the posterior. For CBC (compact binary coalescence) problems, this highlights the importance of cross-sampler comparisons to ensure results are robust to sampling error. Finally, Bilby-MCMC can be embarrassingly and asynchronously parallelized, making it highly suitable for reducing analysis wall-time in a High Throughput Computing environment. Bilby-MCMC may be a useful tool for the rapid and robust analysis of gravitational-wave signals during the advanced detector era, and we expect it to have utility throughout astrophysics.
Award ID(s): 1912594
NSF-PAR ID: 10348528
Author(s) / Creator(s):
Date Published:
Journal Name: Monthly Notices of the Royal Astronomical Society
Volume: 507
Issue: 2
ISSN: 0035-8711
Page Range / eLocation ID: 2037 to 2051
Format(s): Medium: X
Sponsoring Org: National Science Foundation
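As a concrete illustration of the parallel-tempered Metropolis-Hastings scheme described in the abstract, below is a minimal sketch in plain NumPy. It is not the Bilby-MCMC implementation: the toy Gaussian target, the fixed temperature ladder, and the random-walk proposal are all placeholder choices standing in for Bilby-MCMC's proposal library and adaptive machinery.

```python
# Minimal parallel-tempered Metropolis-Hastings sketch (illustrative only,
# not the Bilby-MCMC implementation). The toy Gaussian target and Gaussian
# random-walk proposal are placeholders.
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Toy target: standard 2D Gaussian.
    return -0.5 * np.sum(x ** 2)

ntemps, ndim, nsteps = 4, 2, 5000
betas = 1.0 / 2.0 ** np.arange(ntemps)  # inverse temperatures: 1, 1/2, 1/4, ...
chains = rng.normal(size=(ntemps, ndim))
logp = np.array([log_post(c) for c in chains])
samples = []

for _ in range(nsteps):
    # Within-temperature Metropolis-Hastings update for every chain.
    for i in range(ntemps):
        prop = chains[i] + 0.5 * rng.normal(size=ndim)
        lp = log_post(prop)
        if np.log(rng.uniform()) < betas[i] * (lp - logp[i]):
            chains[i], logp[i] = prop, lp
    # Propose swapping states between a random adjacent pair of temperatures.
    i = rng.integers(ntemps - 1)
    if np.log(rng.uniform()) < (betas[i] - betas[i + 1]) * (logp[i + 1] - logp[i]):
        chains[[i, i + 1]] = chains[[i + 1, i]]
        logp[[i, i + 1]] = logp[[i + 1, i]]
    # Only the beta = 1 chain targets the posterior; hotter chains aid mixing.
    samples.append(chains[0].copy())
```

The swap moves are what let the cold chain escape well-separated posterior modes, and each chain's loop iteration is independent of the others until the swap, which is the structure that makes the algorithm straightforward to parallelize.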
More Like this
  1. Computational hemodynamic modeling has been widely used in cardiovascular research and healthcare. However, the reliability of model predictions depends largely on the uncertainties of modeling parameters and boundary conditions, which should be carefully quantified and further reduced with available measurements. In this work, we focus on propagating and reducing the uncertainty of vascular geometries within a Bayesian framework. A novel deep learning (DL)-assisted parallel Markov chain Monte Carlo (MCMC) method is presented to enable efficient Bayesian posterior sampling and geometric uncertainty reduction. A DL model is built to approximate the geometry-to-hemodynamic map; it is trained actively using online data collected from parallel MCMC chains and used for early rejection of unlikely proposals, facilitating convergence with fewer expensive full-order model evaluations. Numerical studies on two-dimensional aortic flows demonstrate the effectiveness and merit of the proposed method.
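The early-rejection idea in this abstract is, at its core, a delayed-acceptance step: a cheap surrogate screens each proposal before the expensive solver is consulted, and a second-stage correction keeps the chain exact. A minimal sketch follows, assuming placeholder `surrogate_logpost`, `full_logpost`, and `propose` callables rather than the paper's trained DL model and hemodynamic solver.

```python
# Two-stage "early rejection" (delayed-acceptance) Metropolis step: a cheap
# surrogate screens the proposal; the expensive full-order model is evaluated
# only if the surrogate accepts, and a second stage keeps the chain exact.
# `surrogate_logpost`, `full_logpost`, and `propose` are assumed, illustrative
# callables -- not the paper's trained DL model or hemodynamic solver.
import math
import random

def delayed_acceptance_step(x, logp_x, surrogate_logpost, full_logpost, propose):
    y = propose(x)  # symmetric proposal assumed
    # Stage 1: accept/reject using the surrogate only (no expensive solve).
    a1 = surrogate_logpost(y) - surrogate_logpost(x)
    if math.log(random.random()) >= a1:
        return x, logp_x  # early rejection: full model never evaluated
    # Stage 2: correct with the full model so detailed balance still holds.
    logp_y = full_logpost(y)
    if math.log(random.random()) < (logp_y - logp_x) - a1:
        return y, logp_y
    return x, logp_x
```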
  2. Monte Carlo (MC) methods are widely used in many research areas such as physical simulation, statistical analysis, and machine learning. Applying MC methods requires drawing fast-mixing samples from a given probability distribution. Among existing sampling methods, Hamiltonian Monte Carlo (HMC) utilizes gradient information during Hamiltonian simulation and can produce fast-mixing samples with high efficiency. However, without carefully chosen simulation parameters for a specific problem, HMC generally suffers from simulation locality and wasted computation. The No-U-Turn Sampler (NUTS) was proposed to tune these parameters automatically during simulation and is the current state-of-the-art sampling algorithm. However, NUTS requires frequent gradient calculations of the target distribution and high-volume vector processing, especially for large-scale problems, making it expensive to draw large numbers of samples and motivating hardware acceleration. While hardware accelerators have been proposed for traditional Markov chain Monte Carlo (MCMC) and HMC methods, no existing work targets hardware acceleration of the NUTS algorithm. In this paper, we present the first NUTS accelerator on FPGA while addressing the high complexity of this state-of-the-art algorithm. Our hardware and algorithm co-optimizations include an incremental resampling technique, which leads to a more memory-efficient architecture, and pipeline optimization for multi-chain sampling to maximize throughput. We also explore three levels of parallelism in the NUTS accelerator to further boost performance. Compared with RStan, an optimized C++ NUTS package, our NUTS accelerator reaches a maximum speedup of 50.6X and an energy improvement of 189.7X.
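NUTS builds on HMC, and the compute-dominant kernel an accelerator must target is the gradient-driven leapfrog integration inside each trajectory. Below is a minimal sketch of that kernel, assuming a user-supplied `grad_logp`; it illustrates the arithmetic being accelerated, not the FPGA design itself.

```python
# Leapfrog integrator: the gradient-heavy inner kernel of HMC/NUTS that
# dominates the cost a hardware accelerator targets. `grad_logp` is an
# assumed user-supplied gradient of the log target density.
import numpy as np

def leapfrog(q, p, grad_logp, step_size, n_steps):
    q, p = q.copy(), p.copy()
    p += 0.5 * step_size * grad_logp(q)   # initial half-step on momentum
    for _ in range(n_steps - 1):
        q += step_size * p                # full position step
        p += step_size * grad_logp(q)     # full momentum step
    q += step_size * p
    p += 0.5 * step_size * grad_logp(q)   # final half-step on momentum
    return q, p
```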
  3. We consider Ising models on the hypercube with a general interaction matrix J, and give a polynomial-time sampling algorithm when all but O(1) eigenvalues of J lie in an interval of length one, a situation which occurs in many models of interest. This was previously known for the Glauber dynamics when all eigenvalues fit in an interval of length one; however, a single outlier can force the Glauber dynamics to mix torpidly. Our general result implies the first polynomial-time sampling algorithms for low-rank Ising models such as Hopfield networks with a fixed number of patterns and Bayesian clustering models with low-dimensional contexts, and greatly improves the polynomial-time sampling regime for the antiferromagnetic/ferromagnetic Ising model with inconsistent field on expander graphs. It also improves on previous approximation-algorithm results based on the naive mean-field approximation in variational methods and statistical physics. Our approach is based on a new fusion of ideas from the MCMC and variational inference worlds. As part of our algorithm, we define a new nonconvex variational problem which allows us to sample from an exponential reweighting of a distribution by a negative definite quadratic form, and show how to make this procedure provably efficient using stochastic gradient descent. On top of this, we construct a new simulated tempering chain (on an extended state space arising from the Hubbard-Stratonovich transform) which overcomes the obstacle posed by large positive eigenvalues, and combine it with the SGD-based sampler to solve the full problem.
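For reference, the baseline this result is contrasted with is single-site Glauber dynamics, which resamples each spin from its conditional distribution given the rest. A minimal sketch follows, assuming the convention pi(x) proportional to exp(x·Jx/2 + h·x); J and h here are illustrative inputs, and this is the classical chain whose torpid mixing under an eigenvalue outlier motivates the paper's tempering construction.

```python
# Single-site Glauber dynamics for an Ising model with interaction matrix J
# and external field h, under the convention pi(x) ~ exp(x.J.x / 2 + h.x) on
# x in {-1, +1}^n. J and h are illustrative inputs.
import numpy as np

def glauber_sweep(x, J, h, rng):
    for i in rng.permutation(len(x)):
        # Log-odds of spin i being +1 given all other spins.
        local = 2.0 * (J[i] @ x - J[i, i] * x[i] + h[i])
        p_plus = 1.0 / (1.0 + np.exp(-local))
        x[i] = 1 if rng.uniform() < p_plus else -1
    return x
```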
  4. Markov chain Monte Carlo (MCMC) methods sample from unnormalized probability distributions and offer guarantees of exact sampling. However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods. Augmenting samplers with neural networks can potentially improve their efficiency. Previous neural-network-based samplers were trained with objectives that either did not explicitly encourage exploration, or contained a term that encouraged exploration but only for well-structured distributions. Here we propose to maximize proposal entropy for adapting the proposal to distributions of any shape. To optimize proposal entropy directly, we devised a neural network MCMC sampler that has a flexible and tractable proposal distribution. Specifically, our network architecture utilizes the gradient of the target distribution for generating proposals. Our model achieved significantly higher efficiency than previous neural network MCMC techniques in a variety of sampling tasks, sometimes by more than an order of magnitude. Further, the sampler was demonstrated through the training of a convergent energy-based model of natural images. The adaptive sampler achieved unbiased sampling with significantly higher proposal entropy than a Langevin dynamics sampler, and the trained sampler also achieved better sample quality.
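The Langevin dynamics baseline mentioned above is, in its Metropolis-adjusted form (MALA), the simplest gradient-based proposal that the learned sampler generalizes. A minimal sketch follows, assuming user-supplied `logp` and `grad_logp` callables; it shows the comparison point, not the paper's neural sampler.

```python
# One Metropolis-adjusted Langevin (MALA) step: the gradient-based baseline
# the learned sampler is compared against. `logp` and `grad_logp` are assumed
# user-supplied callables; this is not the paper's neural sampler.
import numpy as np

def mala_step(x, logp, grad_logp, eps, rng):
    mu_x = x + 0.5 * eps * grad_logp(x)
    y = mu_x + np.sqrt(eps) * rng.normal(size=x.shape)  # Langevin proposal
    mu_y = y + 0.5 * eps * grad_logp(y)
    # The proposal is asymmetric, so its densities enter the MH ratio.
    log_q_y_given_x = -np.sum((y - mu_x) ** 2) / (2.0 * eps)
    log_q_x_given_y = -np.sum((x - mu_y) ** 2) / (2.0 * eps)
    log_alpha = logp(y) - logp(x) + log_q_x_given_y - log_q_y_given_x
    return y if np.log(rng.uniform()) < log_alpha else x
```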
  5. In the problem of spotlight-mode airborne synthetic aperture radar (SAR) image formation, it is well known that data collected over a wide azimuthal angle violate the isotropic scattering property typically assumed. Many techniques have been proposed to account for this issue, including both full-aperture and sub-aperture methods based on filtering, regularized least squares, and Bayesian methods. A full-aperture method that uses a hierarchical Bayesian prior to incorporate appropriate speckle modeling and reduction was recently introduced to produce samples of the posterior density rather than a single image estimate. This uncertainty quantification information is more robust, as it can generate a variety of statistics for the scene. As proposed, however, the method was not well suited for large problems, because the sampling was inefficient. Moreover, the method was not explicitly designed to mitigate the effects of the faulty isotropic scattering assumption. In this work we therefore propose a new sub-aperture SAR imaging method that uses a sparse Bayesian learning-type algorithm to more efficiently produce approximate posterior densities for each sub-aperture window. These estimates may be useful in and of themselves, or, when of interest, the statistics from these distributions can be combined to form a composite image. Furthermore, unlike the often-employed ℓp-regularized least squares methods, no user-defined parameters are required. Application-specific adjustments are made to reduce the typically burdensome runtime and storage requirements so that appropriately large images can be generated. Finally, this paper focuses on incorporating these techniques into the SAR image formation process, that is, starting from SAR phase history data, so that no additional processing errors are incurred. The advantages over existing SAR image formation methods are clearly demonstrated with numerical experiments using real-world data.
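One simple way to realize the "combine sub-aperture statistics into a composite image" step is precision-weighted fusion of per-window posterior means and variances. The sketch below is a generic illustration under that assumption, not the paper's exact combination rule.

```python
# Precision-weighted fusion of per-sub-aperture posterior statistics into a
# composite image. A generic illustration only; the paper's combination rule
# may differ. `means` and `variances` are stacks of per-window estimates.
import numpy as np

def composite_image(means, variances, eps=1e-12):
    m = np.asarray(means)                    # shape (n_windows, H, W)
    w = 1.0 / (np.asarray(variances) + eps)  # precision weights
    fused_mean = np.sum(w * m, axis=0) / np.sum(w, axis=0)
    fused_var = 1.0 / np.sum(w, axis=0)      # combined per-pixel uncertainty
    return fused_mean, fused_var
```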