
Title: Machine Learning for Conservative-to-Primitive in Relativistic Hydrodynamics
The numerical solution of relativistic hydrodynamics equations in conservative form requires root-finding algorithms that invert the conservative-to-primitive variables map. These algorithms employ the equation of state of the fluid and can be computationally demanding for applications involving sophisticated microphysics models, such as those required to calculate accurate gravitational wave signals in numerical relativity simulations of binary neutron stars. This work explores the use of machine learning methods to speed up the recovery of primitives in relativistic hydrodynamics. Artificial neural networks are trained to replace either the interpolations of a tabulated equation of state or directly the conservative-to-primitive map. The application of these neural networks to simple benchmark problems shows that both approaches improve over traditional root finders with tabular equation-of-state and multi-dimensional interpolations. In particular, the neural networks for the conservative-to-primitive map accelerate the variable recovery by more than an order of magnitude over standard methods while maintaining accuracy. Neural networks are thus an interesting option to improve the speed and robustness of relativistic hydrodynamics algorithms.
Authors:
Award ID(s):
2020275 2011725 2116686
Publication Date:
NSF-PAR ID:
10331438
Journal Name:
Symmetry
Volume:
13
Issue:
11
Page Range or eLocation-ID:
2157
ISSN:
2073-8994
Sponsoring Org:
National Science Foundation
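
As a concrete illustration of the second approach described in the abstract above, a neural network that replaces the conservative-to-primitive map directly, the following is a minimal sketch in PyTorch. It assumes a special-relativistic ideal-fluid setting in which the network predicts only the pressure from the conserved variables (D, S, tau) and the remaining primitives are then recovered in closed form; the network width, activations, and training setup are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a small MLP maps the conserved
# variables (D, S, tau) of special-relativistic hydrodynamics to the pressure,
# after which the remaining primitives follow in closed form. Network width,
# activations, and the training setup are illustrative assumptions.
import torch
import torch.nn as nn

class C2PNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Sigmoid(),
            nn.Linear(hidden, hidden), nn.Sigmoid(),
            nn.Linear(hidden, 1),
        )

    def forward(self, cons):        # cons: tensor of shape (N, 3) holding (D, S, tau)
        return self.net(cons)       # predicted pressure p, shape (N, 1)

def primitives_from_pressure(cons, p):
    """Closed-form recovery of (rho, v, eps) once p is known (flat-space,
    one-dimensional velocity; standard SRHD relations)."""
    D, S, tau = cons[:, 0], cons[:, 1], cons[:, 2]
    p = p.squeeze(-1)
    v = S / (tau + D + p)                     # fluid velocity
    W = 1.0 / torch.sqrt(1.0 - v ** 2)        # Lorentz factor
    rho = D / W                               # rest-mass density
    eps = (tau + D * (1.0 - W) + p * (1.0 - W ** 2)) / (D * W)  # specific internal energy
    return rho, v, eps

# Training would minimize e.g. nn.MSELoss() between predicted and exact pressure
# on samples produced with the (cheap, explicit) primitive-to-conservative map.
```

Once trained, recovering primitives for a whole grid amounts to a few batched matrix multiplications instead of a per-cell Newton-Raphson iteration with repeated equation-of-state table lookups, which is the source of the speed-up quoted in the abstract.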
More Like this
  1. Inundation mapping is a critical task for damage assessment, emergency management, and prioritizing relief efforts during a flooding event. Remote sensing has been an effective tool for interpreting and analyzing water bodies and detecting floods over the past decades. In recent years, deep learning algorithms such as convolutional neural networks (CNNs) have demonstrated promising performance in remote sensing image classification for many applications, including inundation mapping. Unlike conventional algorithms, deep learning can learn features automatically from large datasets. This research compares and investigates the performance of two state-of-the-art methods for 3D inundation mapping: a deep learning-based image analysis and a Geomorphic Flood Index (GFI). The first method, deep learning image analysis, involves three steps: 1) classify imagery to delineate flood boundaries, 2) integrate the flood boundaries and topography data to create a three-dimensional (3D) water surface, and 3) compare the 3D water surface with pre-flood topography to estimate floodwater depth. The second method, the GFI, involves three phases: 1) calculate river basin morphological information, such as river height (hr) and elevation difference (H), 2) calibrate and measure the GFI to delineate flood boundaries, and 3) calculate the coefficient parameter (α) and correct the value of hr to estimate inundation depth. The methods were implemented to generate 3D inundation maps over Princeville, North Carolina, United States during Hurricane Matthew in 2016. The deep learning method demonstrated better performance, with a root mean square error (RMSE) of 0.26 m for water depth. It also achieved about 98% accuracy in delineating the flood boundaries using UAV imagery. This approach is efficient in extracting and creating a 3D flood extent map at different scales to support emergency response and recovery activities during a flood event.
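
As a rough illustration of the depth-estimation step of the deep-learning workflow in item 1, differencing a water surface against pre-flood topography, here is a minimal sketch. The nearest-boundary-elevation fill used to build the water surface is an illustrative stand-in, not the interpolation used in that paper, and the function and array names are hypothetical.

```python
# Minimal sketch of the depth-estimation step: difference an approximate water
# surface against a pre-flood DEM inside the classified flood extent. The
# nearest-boundary-elevation fill used here is an illustrative stand-in for a
# proper water-surface interpolation; function and array names are hypothetical.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial import cKDTree

def flood_depth(flood_mask: np.ndarray, dem: np.ndarray) -> np.ndarray:
    """flood_mask: boolean raster (True = flooded); dem: terrain elevation in metres."""
    # Boundary cells: flooded cells adjacent to dry ground.
    boundary = flood_mask & ~binary_erosion(flood_mask)
    by, bx = np.nonzero(boundary)
    fy, fx = np.nonzero(flood_mask)
    # Give every flooded cell the DEM elevation of its nearest boundary cell,
    # a crude approximation of the 3D water surface.
    tree = cKDTree(np.column_stack([by, bx]))
    _, nearest = tree.query(np.column_stack([fy, fx]))
    water_surface = dem[by[nearest], bx[nearest]]
    depth = np.zeros_like(dem, dtype=float)
    depth[fy, fx] = np.maximum(water_surface - dem[fy, fx], 0.0)
    return depth
```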
  2. The Sylvester equation offers a powerful and unifying primitive for a variety of important graph mining tasks, including network alignment, graph kernels, node similarity, subgraph matching, etc. A major bottleneck of the Sylvester equation lies in its high computational complexity. Despite tremendous effort, state-of-the-art methods still require a complexity that is at least quadratic in the number of nodes of the graphs, even with approximations. In this paper, we propose a family of Krylov subspace based algorithms (FASTEN) to speed up and scale up the computation of the Sylvester equation for graph mining. The key idea of the proposed methods is to project the original equivalent linear system onto a Kronecker Krylov subspace. We further exploit (1) the implicit representation of the solution matrix as well as the associated computation, and (2) the decomposition of the original Sylvester equation into a set of inter-correlated Sylvester equations of smaller size. The proposed algorithms bear two distinctive features. First, they provide the exact solutions without any approximation error. Second, they significantly reduce the time and space complexity for solving the Sylvester equation, with two of the proposed algorithms having a linear complexity in both time and space. Experimental evaluations on a diverse set of real networks demonstrate that our methods (1) are up to 10,000× faster than the Conjugate Gradient method, the best known competitor that outputs the exact solution, and (2) scale up to million-node graphs.
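
To make the computational setting of item 2 concrete, the sketch below is not the FASTEN algorithms themselves: it pairs a dense baseline solve of A X + X B = C with a generic Krylov (GMRES) solve that only ever applies the map X -> A X + X B, i.e. it treats the equivalent n^2-dimensional linear system implicitly instead of forming any Kronecker-product matrix. Matrix sizes and the diagonal shift are illustrative.

```python
# Not the FASTEN algorithms: a dense baseline solve of A X + X B = C next to a
# generic Krylov (GMRES) solve that only applies the map X -> A X + X B, i.e. it
# works with the equivalent n^2-dimensional linear system implicitly instead of
# forming any Kronecker-product matrix. Sizes and the diagonal shift are illustrative.
import numpy as np
from scipy.linalg import solve_sylvester
from scipy.sparse.linalg import LinearOperator, gmres

n = 50
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # shift keeps the problem well conditioned
B = rng.standard_normal((n, n)) + n * np.eye(n)
C = rng.standard_normal((n, n))

X_dense = solve_sylvester(A, B, C)                # Bartels-Stewart baseline, O(n^3)

def apply_operator(x):
    X = x.reshape(n, n)
    return (A @ X + X @ B).ravel()                # matrix-free application, no Kronecker products

op = LinearOperator((n * n, n * n), matvec=apply_operator)
x_krylov, info = gmres(op, C.ravel(), atol=1e-10)
print(info, np.linalg.norm(x_krylov.reshape(n, n) - X_dense))   # info == 0 means converged
```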
  3. In many mechanistic medical, biological, physical, and engineered spatiotemporal dynamic models the numerical solution of partial differential equations (PDEs), especially for diffusion, fluid flow and mechanical relaxation, can make simulations impractically slow. Biological models of tissues and organs often require the simultaneous calculation of the spatial variation of concentration of dozens of diffusing chemical species. One clinical example where rapid calculation of a diffusing field is of use is the estimation of oxygen gradients in the retina, based on imaging of the retinal vasculature, to guide surgical interventions in diabetic retinopathy. Furthermore, the ability to predict blood perfusion and oxygenation may one day guide clinical interventions in diverse settings, e.g., from stent placement in treating heart disease to BOLD fMRI interpretation in evaluating cognitive function (Xie et al., 2019; Lee et al., 2020). Since the quasi-steady-state solutions required for fast-diffusing chemical species like oxygen are particularly computationally costly, we consider the use of a neural network to provide an approximate solution to the steady-state diffusion equation. Machine learning surrogates, neural networks trained to provide approximate solutions to such complicated numerical problems, can often provide speed-ups of several orders of magnitude compared to direct calculation. Surrogates of PDEs could enable the use of larger and more detailed models than are possible with direct calculation and can make including such simulations in real-time or near-real-time workflows practical. Creating a surrogate requires running the direct calculation tens of thousands of times to generate training data and then training the neural network, both of which are computationally expensive. Often the practical applications of such models require thousands to millions of replica simulations, for example for parameter identification and uncertainty quantification, each of which gains speed from surrogate use and rapidly recovers the up-front costs of surrogate generation. We use a convolutional neural network to approximate the stationary solution to the diffusion equation in the case of two equal-diameter, circular, constant-value sources located at random positions in a two-dimensional square domain with absorbing boundary conditions. Such a configuration caricatures the chemical concentration field of a fast-diffusing species like oxygen in a tissue with two parallel blood vessels in a cross section perpendicular to the two blood vessels. To improve convergence during training, we apply a training approach that uses roll-back to reject stochastic changes to the network that increase the loss function. The trained neural network approximation is about 1000 times faster than the direct calculation for individual replicas. Because different applications will have different criteria for acceptable approximation accuracy, we discuss a variety of loss functions and accuracy estimators that can help select the best network for a particular application. We briefly discuss some of the issues we encountered with overfitting, mismapping of the field values, and the geometrical conditions that lead to large absolute and relative errors in the approximate solution.
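
As a small, self-contained illustration of the kind of direct calculation that the surrogate in item 3 is trained to replace, the sketch below builds one training pair: an image marking two random circular constant-value sources and the corresponding quasi-steady-state field from a Jacobi relaxation with absorbing (zero) outer boundaries. Grid size, source radius, source value, and iteration count are illustrative assumptions; the paper's CNN would be trained on many such input/target pairs.

```python
# Illustrative generation of one surrogate training pair: an image marking two
# random circular constant-value sources and the corresponding quasi-steady-state
# field from a direct Jacobi relaxation with absorbing (zero) outer boundaries.
# Grid size, source radius, source value, and iteration count are assumptions.
import numpy as np

N = 64
rng = np.random.default_rng(1)

def source_mask(radius=4):
    """Two equal-diameter circular sources at random interior positions."""
    yy, xx = np.mgrid[0:N, 0:N]
    mask = np.zeros((N, N), dtype=bool)
    for _ in range(2):
        cy, cx = rng.integers(radius + 1, N - radius - 1, size=2)
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return mask

def steady_state(mask, value=1.0, n_iter=5000):
    """Direct calculation the CNN is meant to replace: Jacobi relaxation of the
    Laplace equation with the sources clamped to `value` and zero boundaries."""
    u = np.zeros((N, N))
    for _ in range(n_iter):
        u[mask] = value
        u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    u[mask] = value
    return u

mask = source_mask()
x = mask.astype(np.float32)   # surrogate input: the source-geometry image
y = steady_state(mask)        # surrogate target: the expensive direct solve
```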
  4. We investigate r-process nucleosynthesis and kilonova emission resulting from binary neutron star (BNS) mergers based on a three-dimensional (3D) general-relativistic magnetohydrodynamic (GRMHD) simulation of a hypermassive neutron star (HMNS) remnant. The simulation includes a microphysical finite-temperature equation of state (EOS) and neutrino emission and absorption effects via a leakage scheme. We track the thermodynamic properties of the ejecta using Lagrangian tracer particles and determine its composition using the nuclear reaction network SkyNet. We investigate the impact of neutrinos on the nucleosynthetic yields by varying the neutrino luminosities during post-processing. The ejecta show a broad distribution with respect to their electron fraction Ye, peaking between ∼0.25 and 0.4 depending on the neutrino luminosity employed. We find that the resulting r-process abundance patterns differ from solar, with no significant production of material beyond the second r-process peak when using luminosities recorded by the tracer particles. We also map the HMNS outflows to the radiation hydrodynamics code SNEC and predict the evolution of the bolometric luminosity as well as broadband light curves of the kilonova. The bolometric light curve peaks on the timescale of a day and the brightest emission is seen in the infrared bands. This is the first direct calculation of the r-process yields and kilonova signal expected from HMNS winds based on 3D GRMHD simulations. For longer-lived remnants, these winds may be the dominant ejecta component producing the kilonova emission.
  5. A novel green granular neural network (GGNN) with new fast software-FPGA co-designed learning is developed to reduce both CO2 emissions and energy consumption more effectively than popular neural networks with traditional software-CPU-GPU-based learning. Unlike traditional, tedious CPU-GPU-based training algorithms that use gradient descent or other methods such as genetic algorithms, the software-FPGA co-designed training algorithm can quickly solve a system of linear equations to directly calculate optimal values of the GGNN's hyperparameters. Initial simulation results indicate that the FPGA equation-solver code ran faster than the Python equation-solver code, so implementing the GGNN with software-FPGA co-designed learning is feasible. In addition, the shallow high-speed GGNN is explainable because it can generate interpretable granular If-Then rules. In the future, the GGNN will be evaluated against other machine learning models with traditional software-based learning in terms of speed, model size, accuracy, CO2 emissions, and energy consumption using popular datasets. New algorithms will be created to divide the inputs into different input groups that will be used to build different small-size GGNNs to address the curse of dimensionality. Additionally, an explainable green granular convolutional neural network will be developed by using the GGNNs as basic building blocks to efficiently solve image recognition problems.
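
The training idea in item 5, replacing gradient descent by the direct solution of a system of linear equations, can be illustrated with a generic one-shot fit in the style of an extreme learning machine. This is a hedged stand-in for that general family of techniques, not the GGNN's granular formulation or its FPGA solver, and all sizes and names below are made up for the example.

```python
# Generic stand-in for training by solving linear equations rather than running
# gradient descent: fix random hidden weights and obtain the output weights from
# a single ridge-regularized linear solve (extreme-learning-machine style). The
# GGNN's granular rules and FPGA solver are not reproduced; all names are made up.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 8))                 # toy inputs
y = np.sin(X).sum(axis=1, keepdims=True)          # toy targets

W_in = rng.standard_normal((8, 64))               # fixed random hidden-layer weights
H = np.tanh(X @ W_in)                             # hidden activations
lam = 1e-3                                        # ridge regularization
W_out = np.linalg.solve(H.T @ H + lam * np.eye(64), H.T @ y)   # one linear solve, no iterations
print("train RMSE:", float(np.sqrt(np.mean((H @ W_out - y) ** 2))))
```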