
Title: Study of Jupiter’s Interior with Quadratic Monte Carlo Simulations

We construct models for Jupiter’s interior that match the gravity data obtained by the Juno and Galileo spacecraft. To generate ensembles of models, we introduce a novel quadratic Monte Carlo technique, which is more efficient in confining fitness landscapes than the affine invariant method that relies on linear stretch moves. We compare how long it takes the ensembles of walkers in both methods to travel to the most relevant parameter region. Once there, we compare the autocorrelation time and error bars of the two methods. For a ring potential and the 2D Rosenbrock function, we find that our quadratic Monte Carlo technique is significantly more efficient. Furthermore, we modified the walk moves by adding a scaling factor. We provide the source code and examples so that this method can be applied elsewhere. Here we employ our method to generate five-layer models for Jupiter’s interior that include winds and a prominent dilute core, which allows us to match the planet’s even and odd gravity harmonics. We compare predictions from the different model ensembles and analyze how much an increase in the temperature at 1 bar and an ad hoc change to the equation of state affect the inferred amount of heavy elements in the atmosphere and in the planet overall.
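The baseline method the paper compares against, the affine-invariant linear stretch move, proposes a new walker position along the line through the current walker and another walker from the ensemble. A minimal serial sketch, applied to the 2D Rosenbrock test function mentioned in the abstract (this is our illustration, not the authors' published code; the quadratic technique replaces this linear interpolation with a proposal built from additional walkers):

```python
import numpy as np

def log_rosenbrock(p):
    """Negative 2D Rosenbrock function used as a log-density (a common MCMC test)."""
    x, y = p
    return -((1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2) / 20.0

def stretch_move_sweep(walkers, log_prob, a=2.0, rng=None):
    """One sweep of the affine-invariant linear stretch move.

    Each walker x_k is moved along the line through a randomly chosen
    other walker x_j:
        y = x_j + z * (x_k - x_j),  with z ~ g(z) proportional to 1/sqrt(z) on [1/a, a],
    and accepted with probability min(1, z^(d-1) * p(y) / p(x_k)).
    """
    rng = rng or np.random.default_rng()
    n, d = walkers.shape
    for k in range(n):
        j = rng.choice([i for i in range(n) if i != k])
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a   # draws z from g(z)
        proposal = walkers[j] + z * (walkers[k] - walkers[j])
        log_accept = (d - 1) * np.log(z) + log_prob(proposal) - log_prob(walkers[k])
        if np.log(rng.random()) < log_accept:
            walkers[k] = proposal
    return walkers
```

After many sweeps the ensemble samples the target density; the autocorrelation time of such chains is the efficiency metric the abstract refers to.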

DOI Prefix: 10.3847
Journal Name: The Astrophysical Journal
Medium: Article No. 111
Sponsoring Org: National Science Foundation
More Like this
  1. We present a novel technique for tailoring Bayesian quadrature (BQ) to model selection. The state-of-the-art for comparing the evidence of multiple models relies on Monte Carlo methods, which converge slowly and are unreliable for computationally expensive models. Although previous research has shown that BQ offers sample efficiency superior to Monte Carlo in computing the evidence of an individual model, applying BQ directly to model comparison may waste computation producing an overly-accurate estimate for the evidence of a clearly poor model. We propose an automated and efficient algorithm for computing the most-relevant quantity for model selection: the posterior model probability. Our technique maximizes the mutual information between this quantity and observations of the models’ likelihoods, yielding efficient sample acquisition across disparate model spaces when likelihood observations are limited. Our method produces more-accurate posterior estimates using fewer likelihood evaluations than standard Bayesian quadrature and Monte Carlo estimators, as we demonstrate on synthetic and real-world examples. 
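The target quantity named in this abstract, the posterior model probability, is a normalization of the prior-weighted model evidences, however those evidences are estimated (by Bayesian quadrature or Monte Carlo). A minimal numerically stable sketch (function and argument names are ours, not the paper's):

```python
import numpy as np

def posterior_model_probs(log_evidences, log_priors=None):
    """Posterior model probabilities p(M_i | D) from estimated log-evidences.

    p(M_i | D) = Z_i * p(M_i) / sum_j Z_j * p(M_j), computed in log space
    to avoid overflow/underflow when evidences span many orders of magnitude.
    """
    log_z = np.asarray(log_evidences, dtype=float)
    if log_priors is None:
        log_priors = np.zeros_like(log_z)   # uniform prior over models
    log_post = log_z + log_priors
    log_post -= log_post.max()              # stabilize the exponentials
    p = np.exp(log_post)
    return p / p.sum()
```

The paper's contribution is in choosing likelihood observations to pin down this quantity efficiently; the normalization itself is the cheap final step.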

  2. We calculate the fundamental stellar parameters effective temperature, surface gravity, and iron abundance – Teff, log g, [Fe/H] – for the final release of the Mapping Nearby Galaxies at APO (MaNGA) Stellar Library (MaStar), containing 59 266 per-visit spectra for 24 290 unique stars at intermediate resolution (R ∼ 1800) and high S/N (median = 96). We fit theoretical spectra from model atmospheres by both MARCS and BOSZ-ATLAS9 to the observed MaStar spectra, using the full spectral fitting code pPXF. We further employ a Bayesian approach, using a Markov Chain Monte Carlo (MCMC) technique to map the parameter space and obtain uncertainties. For the first time in this paper, we cross-match MaStar observations with Gaia photometry, which enables us to set reliable priors and identify outliers according to stellar evolution. In parallel to the parameter determination, we calculate corresponding stellar population models to test the reliability of the parameters for each stellar evolutionary phase. We further assess our procedure by determining parameters for standard stars such as the Sun and Vega and by comparing our parameters with those determined in the literature from high-resolution spectroscopy (APOGEE and SEGUE) and from lower resolution template matching (LAMOST). The comparisons, considering the different methodologies and S/N of the literature surveys, are favourable in all cases. Our final parameter catalogue for MaStar covers the following ranges: 2592 ≤ Teff ≤ 32 983 K; −0.7 ≤ log g ≤ 5.4 dex; −2.9 ≤ [Fe/H] ≤ 1.0 dex and will be available with the last SDSS-IV Data Release, in 2021 December.

  3. Abstract

    Species distribution models (SDMs) are frequently used to predict the effects of climate change on species of conservation concern. Biases inherent in the process of constructing SDMs and transferring them to new climate scenarios may result in undesirable conservation outcomes. We explore these issues and demonstrate new methods to estimate biases induced by the design of SDM studies. We present these methods in the context of estimating the effects of climate change on Australia's only endemic Pokémon.

    Using a citizen science dataset, we build species distribution models for Garura kangaskhani to predict the effects of climate change on the suitability of habitat for the species. We demonstrate a novel Monte Carlo procedure for estimating the biases implicit in a given study design, and compare the results seen for Pokémon to those seen from our Monte Carlo tests as well as previous studies in the same region using both simulated and real data.

    Our models suggest that climate change will impact the suitability of habitat for G. kangaskhani, which may compound the effects of threats such as habitat loss and their use in blood sport. However, we also find that using SDMs to estimate the effects of climate change can be accompanied by biases so strong that the data themselves have minimal impact on modelling outcomes.

    We show that the direction and magnitude of bias in estimates of climate change impacts are affected by every aspect of the modelling process, and suggest that bias estimates should be included in future studies of this type. Given the widespread use of SDMs, systemic biases could have substantial financial and opportunity costs. By demonstrating these biases and presenting a novel statistical tool to estimate them, we hope to provide a more secure future for G. kangaskhani and the rest of the world's biodiversity.

  4. Abstract

    In this paper, we propose a sparse Bayesian procedure with global and local (GL) shrinkage priors for the problems of variable selection and classification in high‐dimensional logistic regression models. In particular, we consider two types of GL shrinkage priors for the regression coefficients, the horseshoe (HS) prior and the normal‐gamma (NG) prior, and then specify a correlated prior for the binary vector to distinguish models with the same size. The GL priors are then combined with mixture representations of the logistic distribution to construct a hierarchical Bayes model that allows efficient implementation of a Markov chain Monte Carlo (MCMC) algorithm to generate samples from the posterior distribution. We carry out simulations to compare the finite sample performances of the proposed Bayesian method with existing Bayesian methods in terms of the accuracy of variable selection and prediction. Finally, two real‐data applications are provided for illustrative purposes.
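The global-local structure of the horseshoe prior named above is a normal scale mixture: beta_j ~ N(0, lambda_j^2 * tau^2) with half-Cauchy local scales lambda_j and a half-Cauchy global scale tau. A sketch that draws from this prior to illustrate the GL hierarchy (our illustration only, not the paper's MCMC sampler):

```python
import numpy as np

def sample_horseshoe_prior(n, rng=None):
    """Draw n regression coefficients from the horseshoe prior:

        tau      ~ HalfCauchy(0, 1)   (global shrinkage, shared)
        lambda_j ~ HalfCauchy(0, 1)   (local shrinkage, per coefficient)
        beta_j   ~ N(0, (lambda_j * tau)^2)

    The heavy-tailed local scales let a few coefficients escape the
    strong global shrinkage, giving sparse-looking draws.
    """
    rng = rng or np.random.default_rng()
    tau = np.abs(rng.standard_cauchy())
    lam = np.abs(rng.standard_cauchy(n))
    return rng.normal(0.0, lam * tau)
```

In the paper's full procedure this prior is combined with a mixture representation of the logistic likelihood so that the conditional updates stay tractable.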

  5. Abstract

    Improved efficiency of Markov chain Monte Carlo facilitates all aspects of statistical analysis with Bayesian hierarchical models. Identifying strategies to improve MCMC performance is becoming increasingly crucial as the complexity of models, and the run times to fit them, increases. We evaluate different strategies for improving MCMC efficiency using the open‐source software NIMBLE (R package nimble) using common ecological models of species occurrence and abundance as examples. We ask how MCMC efficiency depends on model formulation, model size, data, and sampling strategy. For multiseason and/or multispecies occupancy models and for N‐mixture models, we compare the efficiency of sampling discrete latent states vs. integrating over them, including more vs. fewer hierarchical model components, and univariate vs. block‐sampling methods. We include the common MCMC tool JAGS in comparisons. For simple models, there is little practical difference between computational approaches. As model complexity increases, there are strong interactions between model formulation and sampling strategy on MCMC efficiency. There is no one‐size‐fits‐all best strategy, but rather problem‐specific best strategies related to model structure and type. In all but the simplest cases, NIMBLE's default or customized performance achieves much higher efficiency than JAGS. In the two most complex examples, NIMBLE was 10–12 times more efficient than JAGS. We find NIMBLE is a valuable tool for many ecologists utilizing Bayesian inference, particularly for complex models where JAGS is prohibitively slow. Our results highlight the need for more guidelines and customizable approaches to fit hierarchical models to ensure practitioners can make the most of occupancy and other hierarchical models. 
By implementing model‐generic MCMC procedures in open‐source software, including the NIMBLE extensions for integrating over latent states (implemented in the R package nimbleEcology), we have made progress toward this aim.
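The "integrating over latent states" strategy compared in this study can be illustrated for the simplest case, a single-season occupancy model, by marginalizing the latent site-occupancy indicator analytically instead of sampling it (a plain-Python sketch in our notation, not NIMBLE or JAGS code):

```python
import numpy as np

def occupancy_loglik(y, psi, p):
    """Marginal log-likelihood of a single-season occupancy model.

    y:   (sites, visits) binary detection histories.
    psi: probability a site is occupied.
    p:   per-visit detection probability given occupancy.

    Integrates over the latent occupancy state z_i:
        P(y_i) = psi * prod_j Bern(y_ij | p) + (1 - psi) * 1{all y_ij = 0}
    """
    y = np.asarray(y)
    det = y.sum(axis=1)                     # detections per site
    J = y.shape[1]                          # visits per site
    lp_occ = det * np.log(p) + (J - det) * np.log(1.0 - p)
    like = psi * np.exp(lp_occ) + (1.0 - psi) * (det == 0)
    return np.log(like).sum()
```

Sampling the discrete z_i instead, as JAGS does by default, targets the same posterior; the comparison in the paper is about which strategy mixes more efficiently per unit of computation.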
