-
Abstract: We use two different methods, Monte Carlo sampling and variational inference (VI), to perform a Bayesian calibration of the effective-range parameters in 3He–4He elastic scattering. The parameters are calibrated to data from a recent set of 3He–4He elastic scattering differential cross section measurements. Analysis of these data for Elab ≤ 4.3 MeV yields a unimodal posterior for which both methods obtain the same structure. However, the effective-range expansion amplitude does not account for the 7/2− state of 7Be, so, even after calibration, the description of data at the upper end of this energy range is poor. The data up to Elab = 2.6 MeV can be well described, but calibration to this lower-energy subset of the data yields a bimodal posterior. After adapting VI to treat such a multi-modal posterior, we find good agreement between the VI results and those obtained with parallel-tempered Monte Carlo sampling.
Free, publicly-accessible full text available December 17, 2025.
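The abstract does not give implementation details, but the parallel-tempered Monte Carlo sampling it mentions can be illustrated with a minimal toy sketch: several Metropolis chains run at different inverse temperatures on a deliberately bimodal one-dimensional target (a stand-in for the effective-range parameter posterior; all numbers here are invented for illustration), with occasional swaps between adjacent temperatures so the cold chain can cross between modes.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_post(x):
    # Toy bimodal target standing in for the bimodal parameter posterior
    return np.logaddexp(-0.5 * ((x + 2.0) / 0.3) ** 2,
                        -0.5 * ((x - 2.0) / 0.3) ** 2)

betas = np.array([1.0, 0.5, 0.25, 0.1])  # inverse temperatures; chain 0 is the cold chain
chains = np.zeros(len(betas))
samples = []

for step in range(20000):
    # Within-temperature Metropolis updates; hotter chains take larger steps
    for i, beta in enumerate(betas):
        prop = chains[i] + rng.normal(0.0, 1.0 / np.sqrt(beta))
        if np.log(rng.random()) < beta * (log_post(prop) - log_post(chains[i])):
            chains[i] = prop
    # Swap proposal between a random adjacent temperature pair
    j = rng.integers(len(betas) - 1)
    dlog = (betas[j] - betas[j + 1]) * (log_post(chains[j + 1]) - log_post(chains[j]))
    if np.log(rng.random()) < dlog:
        chains[j], chains[j + 1] = chains[j + 1], chains[j]
    samples.append(chains[0])

samples = np.array(samples[5000:])           # discard burn-in
left, right = (samples < 0).mean(), (samples > 0).mean()
```

A single cold Metropolis chain with this step size would typically get stuck in one mode; the swap moves are what let the cold chain inherit mode-hopping from the hot chains.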
-
Abstract: To improve the predictability of complex computational models in experimentally unknown domains, we propose a Bayesian statistical machine learning framework utilizing the Dirichlet distribution that combines results of several imperfect models. This framework can be viewed as an extension of Bayesian stacking. To illustrate the method, we study the ability of Bayesian model averaging and mixing techniques to mine nuclear masses. We show that the global and local mixtures of models reach excellent performance on both prediction accuracy and uncertainty quantification and are preferable to classical Bayesian model averaging. Additionally, our statistical analysis indicates that improving model predictions through mixing rather than mixing of corrected models leads to more robust extrapolations.
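The Dirichlet-based mixing idea can be sketched in a few lines: place a Dirichlet prior on simplex weights over the models' predictions and weight prior draws by the likelihood of the resulting mixture. This is a crude importance-sampling toy, not the paper's actual framework, and the observable, biases, and noise level are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy observable and three deterministic "models" with different systematic biases
x = np.linspace(0.0, 3.0, 25)
y_true = np.sin(x)
biases = np.array([0.30, 0.15, 0.0])       # hypothetical model offsets
preds = y_true[None, :] + biases[:, None]  # shape (3 models, 25 points)

sigma = 0.05
y_obs = y_true + sigma * rng.normal(size=x.size)

# Dirichlet prior over simplex weights; importance-weight the prior draws
# by the Gaussian likelihood of each weighted mixture
draws = rng.dirichlet(np.ones(3), size=20000)      # (S, 3), rows sum to 1
mix = draws @ preds                                 # (S, 25) mixture predictions
loglik = -0.5 * np.sum(((mix - y_obs) / sigma) ** 2, axis=1)
w = np.exp(loglik - loglik.max())
w /= w.sum()

post_mean_weights = w @ draws   # posterior mean of the mixing weights
```

With this setup the posterior weight concentrates on the least-biased model, which is the qualitative behavior one wants from a mixing (rather than averaging) scheme.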
-
One can improve predictability in the unknown domain by combining forecasts of imperfect complex computational models using a Bayesian statistical machine learning framework. In many cases, however, the models used in the mixing process are similar. In addition to contaminating the model space, the existence of such similar, or even redundant, models during the multimodeling process can result in misinterpretation of results and deterioration of predictive performance. In this paper we describe a method based on principal component analysis that eliminates model redundancy. We show that by adding model orthogonalization to the proposed Bayesian model combination framework, one can arrive at better prediction accuracy and reach excellent uncertainty quantification performance. Published by the American Physical Society, 2024.
Free, publicly-accessible full text available September 1, 2025.
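The PCA-based redundancy elimination described above can be illustrated with a toy sketch: stack the models' predictions into a matrix, center across models, and take an SVD. A near-duplicate model contributes no new direction, so the number of components above a variance threshold drops below the number of models. The models, threshold, and data here are all invented for illustration, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 40)

# Four "models"; models 0 and 1 are near-duplicates (redundant information)
base = np.sin(2 * np.pi * x)
preds = np.stack([
    base + 0.10 * x,
    base + 0.10 * x + 1e-3 * rng.normal(size=x.size),  # noisy copy of model 0
    base - 0.20 * x ** 2,
    np.cos(2 * np.pi * x),
])

# PCA in model space: SVD of the prediction matrix centered across models
centered = preds - preds.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
var_frac = s ** 2 / np.sum(s ** 2)

# Retain only components carrying non-negligible variance; the redundant
# model collapses into an already-kept direction
keep = var_frac > 1e-4
ortho_directions = Vt[keep]   # orthonormal prediction directions replacing the model set
```

Mixing over the retained orthogonal directions instead of the raw, correlated model set is the spirit of the orthogonalization step the abstract describes.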