ABSTRACT We develop a method to compute synthetic kilonova light curves by combining numerical relativity simulations of neutron star mergers with the SNEC radiation–hydrodynamics code. We describe our implementation of initial and boundary conditions, r-process heating, and opacities for kilonova simulations. We validate our approach by carefully checking that energy conservation is satisfied and by comparing the SNEC results with those of two semi-analytic light-curve models. We apply our code to the calculation of colour light curves for three binaries having different mass ratios (equal and unequal mass) and different merger outcomes (short-lived and long-lived remnants). We study the sensitivity of our results to hydrodynamic effects, nuclear physics uncertainties in the heating rates, and the duration of the merger simulations. We find that hydrodynamic effects are typically negligible and that homologous expansion is a good approximation in most cases. However, pressure forces can amplify the impact of uncertainties in the radioactive heating rates. We also study the impact of shocks possibly launched into the outflows by a relativistic jet. None of our models match AT2017gfo, the kilonova in GW170817. This points to possible deficiencies in our merger simulations and in kilonova models that neglect non-LTE effects and additional energy injection from the merger remnant, and to the need to go beyond the assumption of spherical symmetry adopted in this work.
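The physical picture in the abstract above (homologously expanding ejecta heated by r-process decay, with radiation leaking out on a diffusion time-scale) can be sketched as a one-zone toy model. All parameter values below (ejecta mass, velocity, grey opacity, heating normalization) are illustrative assumptions, not the paper's SNEC setup:

```python
import numpy as np

# One-zone kilonova sketch: homologous expansion, r-process heating
# eps(t) ~ t^{-1.3}, and a grey opacity. Illustrative values only.
M_EJ = 0.05 * 2.0e33        # ejecta mass [g] (assumed 0.05 Msun)
V_EJ = 0.1 * 3.0e10         # expansion velocity [cm/s] (assumed 0.1c)
KAPPA = 10.0                # grey opacity [cm^2/g] (lanthanide-rich guess)
EPS0 = 1.0e10               # heating normalization [erg/g/s] at 1 day
C_LIGHT = 3.0e10            # speed of light [cm/s]

def heating_rate(t_day):
    """Radioactive r-process heating per gram, eps ~ t^-1.3."""
    return EPS0 * t_day ** -1.3

def diffusion_time(t_s):
    """Photon diffusion time through the expanding sphere at time t."""
    r = V_EJ * t_s
    return 3.0 * KAPPA * M_EJ / (4.0 * np.pi * C_LIGHT * r)

def bolometric_lum(t_day):
    """Heating leaks out once the age approaches the diffusion time."""
    t_s = t_day * 86400.0
    t_d = diffusion_time(t_s)
    # crude interpolation between trapped and free-streaming limits
    return heating_rate(t_day) * M_EJ * (1.0 - np.exp(-(t_s / t_d) ** 2))

days = np.array([1.0, 3.0, 7.0])
lums = bolometric_lum(days)
```

With these (assumed) lanthanide-rich parameters the light curve is still rising over the first week, since the diffusion time exceeds the age at early epochs.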
KilonovaNet: Surrogate models of kilonova spectra with conditional variational autoencoders
ABSTRACT Detailed radiative transfer simulations of kilonova spectra play an essential role in multimessenger astrophysics. Using the simulation results in parameter inference studies requires building a surrogate model from the simulation outputs for use in sampling-based algorithms. In this work, we present KilonovaNet, an implementation of conditional variational autoencoders (cVAEs) for the construction of surrogate models of kilonova spectra. This method can be trained on spectra directly, removing the overhead of pre-processing spectra, and greatly speeds up parameter inference. We build surrogate models of three state-of-the-art kilonova simulation data sets and present in-depth surrogate error evaluation methods, which can in general be applied to any surrogate construction method. By creating synthetic photometric observations from the spectral surrogate, we perform parameter inference for the observed light-curve data of GW170817 and compare the results with previous analyses. Given the speed with which KilonovaNet performs during parameter inference, it will serve as a useful tool in future gravitational wave observing runs to quickly analyse potential kilonova candidates.
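Schematically, a trained cVAE surrogate reduces spectrum generation to a cheap decoder pass conditioned on physical parameters. The sketch below mimics that interface with a random-weight stand-in decoder; the dimensions and the `decode`/`sample_spectra` names are assumptions for illustration, not the KilonovaNet API:

```python
import numpy as np

# cVAE-surrogate interface sketch: a decoder maps (latent z, parameters
# theta) to a spectrum. The two-layer MLP below has RANDOM weights standing
# in for a trained decoder, purely to show the surrogate's call pattern.
rng = np.random.default_rng(0)
LATENT_DIM, PARAM_DIM, SPEC_BINS, HIDDEN = 8, 4, 100, 64

W1 = rng.normal(0.0, 0.1, (LATENT_DIM + PARAM_DIM, HIDDEN))
W2 = rng.normal(0.0, 0.1, (HIDDEN, SPEC_BINS))

def decode(z, theta):
    """Decoder: concatenate latent code and conditioning parameters."""
    h = np.tanh(np.concatenate([z, theta]) @ W1)
    return h @ W2                      # surrogate spectrum (arbitrary units)

def sample_spectra(theta, n=10):
    """Draw n surrogate spectra by sampling the latent prior z ~ N(0, I)."""
    zs = rng.normal(size=(n, LATENT_DIM))
    return np.stack([decode(z, theta) for z in zs])

theta = np.array([0.05, 0.2, 0.3, 30.0])   # e.g. masses, velocity, angle (assumed)
spectra = sample_spectra(theta, n=16)
```

At inference time only the decoder is evaluated, which is why such surrogates are fast enough to sit inside a sampling loop.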
- Award ID(s):
- 2122312
- PAR ID:
- 10353486
- Date Published:
- Journal Name:
- Monthly Notices of the Royal Astronomical Society
- Volume:
- 516
- Issue:
- 1
- ISSN:
- 0035-8711
- Page Range / eLocation ID:
- 1137 to 1148
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
While Bayesian inference is the gold standard for uncertainty quantification and propagation, its use within physical chemistry encounters formidable computational barriers. These bottlenecks are magnified for modeling data with many independent variables, such as X-ray/neutron scattering patterns and electromagnetic spectra. To address this challenge, we employ local Gaussian process (LGP) surrogate models to accelerate Bayesian optimization over these complex thermophysical properties. The time complexity of LGPs scales linearly in the number of independent variables, in stark contrast to the computationally expensive cubic scaling of conventional Gaussian processes. To illustrate the method, we trained an LGP surrogate model on the radial distribution function of liquid neon and observed a 1,760,000-fold speed-up compared to molecular dynamics simulation, beating a conventional GP by three orders of magnitude. We conclude that LGPs are robust and efficient surrogate models poised to expand the application of Bayesian inference in molecular simulations to a broad spectrum of experimental data.
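The linear scaling claimed above comes from treating each grid point of the structured observable as its own small GP over the simulation parameters. A minimal sketch of that idea on a toy radial-distribution-function-like observable (all data, kernel settings, and function names below are made up for illustration, not the authors' code):

```python
import numpy as np

# Local-GP sketch: one independent GP per grid point of the observable.
# With a shared kernel over the parameter samples, one Cholesky/solve
# serves every grid point, so cost grows linearly with the grid size.
rng = np.random.default_rng(1)

def rbf(A, B, ell=0.5):
    """Squared-exponential kernel between parameter sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

# toy training set: 20 parameter samples, observable on 50 grid points
X = rng.uniform(0.0, 1.0, (20, 2))                     # model parameters
grid = np.linspace(0.5, 5.0, 50)
Y = np.sin(grid[None, :] * (1 + X[:, :1])) * X[:, 1:]  # fake simulator output

def lgp_predict(x_star, noise=1e-6):
    """Independent GP posterior mean at each grid point."""
    K = rbf(X, X) + noise * np.eye(len(X))
    k_star = rbf(x_star[None, :], X)[0]
    alpha = np.linalg.solve(K, Y)   # one solve shared by all grid points
    return k_star @ alpha

pred = lgp_predict(np.array([0.4, 0.7]))
```

Because `Y` has one column per grid point, the single linear solve amortizes over the whole observable; only the final matrix-vector product scales with the number of grid points.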
Abstract Data-driven generative design (DDGD) methods utilize deep neural networks to create novel designs based on existing data. The structure-aware DDGD method can handle complex geometries and automate the assembly of separate components into systems, showing promise in facilitating creative designs. However, determining the appropriate vectorized design representation (VDR) to evaluate 3D shapes generated from the structure-aware DDGD model remains largely unexplored. To that end, we conducted a comparative analysis of surrogate models' performance in predicting the engineering performance of 3D shapes using VDRs from two sources: the trained latent space of structure-aware DDGD models, which encodes structural and geometric information, and an embedding method encoding only geometric information. We conducted two case studies: one involving 3D car models focusing on drag coefficients and the other involving 3D aircraft models considering both drag and lift coefficients. Our results demonstrate that using latent vectors as VDRs can significantly degrade surrogate models' predictions. Moreover, increasing the dimensionality of the VDRs in the embedding method may not necessarily improve the prediction, especially when the VDRs contain more information irrelevant to the engineering performance. Therefore, when selecting VDRs for surrogate modeling, the latent vectors obtained from training structure-aware DDGD models must be used with caution, although they are more accessible once training is complete. Attention should also be paid to the underlying physics associated with the engineering performance. This paper provides empirical evidence for the effectiveness of different types of VDRs of structure-aware DDGD for surrogate modeling, thus facilitating the construction of better surrogate models for AI-generated designs.
Multiscale systems biology is having an increasingly powerful impact on our understanding of the interconnected molecular, cellular, and microenvironmental drivers of tumor growth and the effects of novel drugs and drug combinations for cancer therapy. Agent-based models (ABMs) that treat cells as autonomous decision-makers, each with their own intrinsic characteristics, are a natural platform for capturing intratumoral heterogeneity. Agent-based models are also useful for integrating the multiple time and spatial scales associated with vascular tumor growth and response to treatment. Despite all their benefits, the computational costs of solving agent-based models escalate and become prohibitive when simulating millions of cells, making parameter exploration and model parameterization from experimental data very challenging. Moreover, such data are typically limited, coarse-grained, and may lack any spatial resolution, compounding these challenges. We address these issues by developing a first-of-its-kind method that leverages explicitly formulated surrogate models (SMs) to bridge the current computational divide between agent-based models and experimental data. In our approach, Surrogate Modeling for Reconstructing Parameter Surfaces (SMoRe ParS), we quantify the uncertainty in the relationship between agent-based model inputs and surrogate model parameters, and between surrogate model parameters and experimental data. In this way, surrogate model parameters serve as intermediaries between agent-based model inputs and data, making it possible to use them for calibration and uncertainty quantification of agent-based model parameters that map directly onto an experimental data set. We illustrate the functionality and novelty of SMoRe ParS by applying it to an agent-based model of 3D vascular tumor growth and experimental data in the form of tumor volume time-courses. Our method is broadly applicable to situations where preserving underlying mechanistic information is of interest, and where computational complexity and sparse, noisy calibration data hinder model parameterization.
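The surrogate-as-intermediary idea can be illustrated with a deliberately simple stand-in: a closed-form logistic-growth surrogate fit to a tumor-volume time course, whose fitted parameters (r, K) are the quantities that simulator output and data would both be mapped onto. This is an assumed toy form and fitting scheme, not the SMoRe ParS implementation:

```python
import numpy as np

# Toy surrogate-as-intermediary: fit an explicit logistic surrogate to a
# tumor-volume time course; (r, K) then serve as the low-dimensional
# quantities compared between simulator and data.

def logistic(t, v0, r, k):
    """Closed-form logistic growth: the explicit surrogate model."""
    return k / (1.0 + (k / v0 - 1.0) * np.exp(-r * t))

def fit_surrogate(t, v, grid_r, grid_k, v0):
    """Brute-force least squares over an (r, K) grid."""
    best, best_err = None, np.inf
    for r in grid_r:
        for k in grid_k:
            err = np.sum((logistic(t, v0, r, k) - v) ** 2)
            if err < best_err:
                best, best_err = (r, k), err
    return best

# synthetic "data" generated with known parameters, then recovered
t = np.linspace(0.0, 20.0, 40)
v_data = logistic(t, 1.0, 0.4, 50.0)
r_hat, k_hat = fit_surrogate(t, v_data, np.linspace(0.1, 1.0, 46),
                             np.linspace(20.0, 80.0, 61), v0=1.0)
```

In the actual method the same fit would be repeated on ABM outputs across input space, so that uncertainty propagates through (r, K) rather than through the expensive simulator directly.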
Simulating the time evolution of physical systems is pivotal in many scientific and engineering problems. An open challenge in simulating such systems is their multi-resolution dynamics: a small fraction of the system is extremely dynamic and requires very fine-grained resolution, while the majority of the system changes slowly and can be modeled at coarser spatial scales. Typical learning-based surrogate models use a uniform spatial scale, which must resolve the finest required scale and can waste enormous compute to achieve the required accuracy. We introduce Learning controllable Adaptive simulation for Multiresolution Physics (LAMP), the first fully deep-learning-based surrogate model that jointly learns the evolution model and optimizes appropriate spatial resolutions, devoting more compute to the highly dynamic regions. LAMP consists of a Graph Neural Network (GNN) for learning the forward evolution, and a GNN-based actor-critic for learning the policy of spatial refinement and coarsening. We introduce learning techniques that optimize LAMP with a weighted sum of error and computational cost as the objective, allowing LAMP to adapt to the varying relative importance of the error-vs.-computation trade-off at inference time. We evaluate our method on a 1D benchmark of nonlinear PDEs and a challenging 2D mesh-based simulation. We demonstrate that LAMP outperforms state-of-the-art deep learning surrogate models and can adaptively trade off computation to improve long-term prediction error: it achieves an average 33.7% error reduction for 1D nonlinear PDEs, and outperforms MeshGraphNets + classical Adaptive Mesh Refinement (AMR) in 2D mesh-based simulations.
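The error-vs-cost objective described above can be demonstrated with a toy 1D refinement loop: repeatedly split the cell with the largest local interpolation error until the weighted objective J = error + beta * cost stops improving. This is a hand-written greedy heuristic on a known target function, not the paper's learned GNN actor-critic policy:

```python
import numpy as np

# Toy adaptive refinement under a weighted error + cost objective.
def target(x):
    return np.tanh(20.0 * (x - 0.5))   # sharp front: only part needs resolution

def objective(xs, beta):
    """J = integrated linear-interpolation error + beta * number of nodes."""
    mid = 0.5 * (xs[:-1] + xs[1:])
    err = np.sum(np.abs(target(mid) - 0.5 * (target(xs[:-1]) + target(xs[1:])))
                 * np.diff(xs))
    return err + beta * len(xs)

def refine_once(xs):
    """Split the cell with the largest midpoint interpolation error."""
    mid = 0.5 * (xs[:-1] + xs[1:])
    cell_err = np.abs(target(mid) - 0.5 * (target(xs[:-1]) + target(xs[1:])))
    return np.sort(np.append(xs, mid[np.argmax(cell_err)]))

xs = np.linspace(0.0, 1.0, 9)
beta = 1e-4                 # relative weight of compute cost vs. error
while True:
    xs_new = refine_once(xs)
    if objective(xs_new, beta) >= objective(xs, beta):
        break               # adding a node no longer pays for its cost
    xs = xs_new
```

Raising `beta` makes the loop stop earlier with a coarser grid; lowering it buys accuracy with more nodes, which is the trade-off LAMP exposes at inference time.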