Title: BUQEYE guide to projection-based emulators in nuclear physics
The BUQEYE collaboration (Bayesian Uncertainty Quantification: Errors in Your effective field theory) presents a pedagogical introduction to projection-based, reduced-order emulators for applications in low-energy nuclear physics. The term emulator refers here to a fast surrogate model capable of reliably approximating high-fidelity models. As the general tools employed by these emulators are not yet well-known in the nuclear physics community, we discuss variational and Galerkin projection methods, emphasize the benefits of offline-online decompositions, and explore how these concepts lead to emulators for bound and scattering systems that enable fast and accurate calculations using many different model parameter sets. We also point to future extensions and applications of these emulators for nuclear physics, guided by the mature field of model (order) reduction. All examples discussed here and more are available as interactive, open-source Python code so that practitioners can readily adapt projection-based emulators for their own work.
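The variational (Ritz) projection idea behind these bound-state emulators can be sketched in a few lines of NumPy. The affine Hamiltonian, matrix sizes, and parameter values below are all invented for illustration; this is a minimal sketch of the snapshot-basis approach, not the collaboration's actual code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented affine Hamiltonian H(theta) = H0 + theta * H1.
n = 200
H0 = np.diag(np.arange(n, dtype=float))
W = rng.standard_normal((n, n))
H1 = 0.1 * (W + W.T) / 2

def ground_state(theta):
    """High-fidelity solve: lowest eigenpair of H(theta)."""
    vals, vecs = np.linalg.eigh(H0 + theta * H1)
    return vals[0], vecs[:, 0]

# Offline stage: collect ground-state snapshots at a few training parameters.
X = np.column_stack([ground_state(t)[1] for t in (0.0, 0.5, 1.0)])
Q, _ = np.linalg.qr(X)  # orthonormalize the snapshot basis

# Online stage: Ritz (Galerkin) projection into the tiny snapshot space.
def emulate(theta):
    H_red = Q.T @ (H0 + theta * H1) @ Q   # 3 x 3 reduced matrix
    return np.linalg.eigvalsh(H_red)[0]

theta = 0.75
exact = ground_state(theta)[0]
approx = emulate(theta)
# The Ritz value is a variational upper bound on the true ground-state energy,
# and is accurate for parameters near the training set.
```

Note that the emulated energy can never fall below the exact one: minimizing the Rayleigh quotient over a subspace bounds the global minimum from above, which is the variational guarantee the abstract alludes to.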
Award ID(s): 2209442, 2004601
NSF-PAR ID: 10420991
Journal Name: Frontiers in Physics
Volume: 10
ISSN: 2296-424X
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract The field of model order reduction (MOR) is growing in importance due to its ability to extract the key insights from complex simulations while discarding computationally burdensome and superfluous information. We provide an overview of MOR methods for the creation of fast and accurate emulators of memory- and compute-intensive nuclear systems, focusing on eigen-emulators and variational emulators. As an example, we describe how 'eigenvector continuation' is a special case of a much more general and well-studied MOR formalism for parameterized systems. We continue with an introduction to the Ritz and Galerkin projection methods that underpin many such emulators, while pointing to the relevant MOR theory and its successful applications along the way. We believe that this guide will open the door to broader applications in nuclear physics and facilitate communication with practitioners in other fields.
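The offline-online decomposition emphasized in both abstracts becomes concrete when the operator depends affinely on the parameters: each affine component is projected once offline, so the online cost per parameter is independent of the full dimension. A minimal sketch, with invented matrices and an assumed precomputed reduced basis:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented affine operator H(theta) = H0 + theta * H1 and an (assumed
# precomputed) orthonormal reduced basis Q with r << n columns.
n, r = 1000, 5
A = rng.standard_normal((n, n)); H0 = (A + A.T) / 2
B = rng.standard_normal((n, n)); H1 = (B + B.T) / 2
Q, _ = np.linalg.qr(rng.standard_normal((n, r)))

# Offline (done once; cost grows with n): project each affine component.
H0_red = Q.T @ H0 @ Q
H1_red = Q.T @ H1 @ Q

# Online (per parameter; cost depends only on r): assemble and solve.
def reduced_energy(theta):
    H_red = H0_red + theta * H1_red   # r x r assembly, no n-sized work
    return np.linalg.eigvalsh(H_red)[0]

# The cheap online assembly reproduces direct projection exactly.
theta = 0.3
direct = Q.T @ (H0 + theta * H1) @ Q
assembled = H0_red + theta * H1_red
```

This is why these emulators can sweep many parameter sets cheaply: the expensive n-dimensional algebra happens once, not once per parameter.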
  2. Abstract

    Quantitative predictions of natural and induced phenomena in fractured rock are among the great challenges in the Earth and Energy Sciences, with far-reaching economic and environmental impacts. Fractures occupy a very small volume of a subsurface formation but often dominate fluid flow, solute transport, and mechanical deformation behavior. They play a central role in CO2 sequestration, nuclear waste disposal, hydrogen storage, geothermal energy production, nuclear nonproliferation, and hydrocarbon extraction. These applications require predictions of fracture-dependent quantities of interest such as CO2 leakage rate, hydrocarbon production, radionuclide plume migration, and seismicity; to be useful, these predictions must account for the uncertainty inherent in subsurface systems. Here, we review recent advances in fractured rock research covering field- and laboratory-scale experimentation, numerical simulations, and uncertainty quantification. We discuss how these have greatly improved the fundamental understanding of fractures and one's ability to predict flow and transport in fractured systems. Dedicated field sites provide quantitative measurements of fracture flow that can be used to identify dominant coupled processes and to validate models. Laboratory-scale experiments fill critical knowledge gaps by providing direct observations and measurements of fracture geometry and flow under controlled conditions that cannot be obtained in the field. Physics-based simulations of flow and transport provide a bridge in understanding between controlled, simple laboratory experiments and the massively complex field-scale fracture systems. Finally, we review the use of machine learning-based emulators to rapidly investigate different fracture property scenarios and accelerate physics-based models by orders of magnitude to enable uncertainty quantification and near real-time analysis.
  3. Penning-trap mass spectrometry in atomic and nuclear physics has become a well-established and reliable tool for the determination of atomic masses. In combination with short-lived radioactive nuclides it was first introduced at ISOLTRAP at the Isotope Mass Separator On-Line facility (ISOLDE) at CERN. Penning traps have found new applications in coupling to other production mechanisms, such as in-flight production and separation systems. The applications in atomic and nuclear physics range from nuclear structure studies and related precision tests of theoretical approaches to the description of the strong interaction, to tests of the electroweak Standard Model, quantum electrodynamics, and neutrino physics, and applications in nuclear astrophysics. The success of Penning-trap mass spectrometry is due to its precision and accuracy, even for low ion intensities (i.e., low production yields), as well as its very fast measurement cycle, enabling access to short-lived isotopes. The current reach in relative mass precision goes beyond δm/m = 10⁻⁸, the half-life limit is as low as a few milliseconds, and the sensitivity is on the order of one ion per minute in the trap. We provide a comprehensive overview of the techniques and applications of Penning-trap mass spectrometry in nuclear and atomic physics.
  4. Designing and/or controlling complex systems in science and engineering relies on appropriate mathematical modeling of system dynamics. Classical differential-equation-based solutions in applied and computational mathematics are often computationally demanding. Recently, the connection between reduced-order models of high-dimensional differential equation systems and surrogate machine learning models has been explored. However, the focus of both existing reduced-order and machine learning models for complex systems has been how to best approximate the high-fidelity model of choice. Because of high system complexity and the often limited training data available for deriving reduced-order or machine learning surrogate models, it is critical that the derived reduced-order models also provide reliable uncertainty quantification. In this paper, we propose such a framework of Bayesian reduced-order models that is naturally equipped with uncertainty quantification, as it learns the distributions of the parameters of the reduced-order models instead of their point estimates. In particular, we develop learnable Bayesian proper orthogonal decomposition (BayPOD), which learns the distributions of both the POD projection bases and the mapping from the system input parameters to the projected scores/coefficients, so that the learned BayPOD can help predict high-dimensional system dynamics/fields as quantities of interest in different setups with reliable uncertainty estimates. The developed learnable BayPOD inherits the capability of embedding physics constraints when learning the POD-based surrogate reduced-order models, a desirable feature when studying complex systems in science and engineering applications where the available training data are limited. Furthermore, the proposed BayPOD method is an end-to-end solution, which, unlike other surrogate-based methods, does not require separate POD and machine learning steps. Results are demonstrated in a real-world case study of the pressure field around an airfoil.
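The deterministic, point-estimate POD that BayPOD generalizes is itself just a truncated SVD of a snapshot matrix. The sketch below is that classical baseline, not the Bayesian method of the abstract; the 1D "field" family and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented snapshot family: a 1D field u(x; p) sampled at n_snap parameters.
n_grid, n_snap = 500, 30
x = np.linspace(0.0, 1.0, n_grid)
params = rng.uniform(0.5, 2.0, n_snap)
S = np.column_stack([np.sin(p * np.pi * x) + 0.1 * p * x for p in params])

# POD basis: leading left singular vectors of the snapshot matrix.
U, svals, _ = np.linalg.svd(S, full_matrices=False)
r = 10
Phi = U[:, :r]          # orthonormal POD modes

# Project an unseen field onto the basis and reconstruct it.
u_new = np.sin(1.3 * np.pi * x) + 0.13 * x
recon = Phi @ (Phi.T @ u_new)
rel_err = np.linalg.norm(u_new - recon) / np.linalg.norm(u_new)
```

BayPOD's contribution, per the abstract, is to learn distributions over both `Phi` and the parameter-to-coefficient map rather than the point estimates computed here.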
  5. Bannister, Julie ; Mohanty, Nihar (Ed.)
    A method for automated creation and optimization of multistep etch recipes is presented. Here we demonstrate how an automated model-based process optimization approach can cut the cost and time of recipe creation by 75% or more as compared with traditional experimental design approaches. Underlying the success of the method are reduced-order physics-based models for simulating the process and performing subsequent analysis of the multidimensional parameter space. SandBox Studio™ AI is used to automate the model selection, model calibration, and subsequent process optimization. The process engineer is only required to provide the incoming stack and experimental measurements for model calibration and updates. The method is applied to the optimization of a channel etch for 3D NAND devices. A reduced-order model that captures the physics and chemistry of the multistep reaction is automatically selected and calibrated. A mirror AI model is simultaneously and automatically created to enable nearly instantaneous predictions across the large process space. The AI model is much faster to evaluate and is used to make a Quilt™, a 2D projection of etch performance in the multidimensional process parameter space. A Quilt™ process map is then used to automatically determine the optimal process window to achieve the target CDs.