

Title: Characterizing the temporal evolution of the high-frequency gravitational wave emission for a core collapse supernova with laser interferometric data: A neural network approach
We present a methodology based on a fully connected neural network to estimate the temporal evolution of the high-frequency gravitational wave emission from a core-collapse supernova (CCSN). For this study we selected a fully connected deep neural network (DNN) regression model because it can learn both linear and nonlinear relationships between the input and output data, it handles high-dimensional input data well, and it offers high performance at low computational cost. To train the machine learning (ML) algorithm we construct a training dataset from synthetic waveforms, and several CCSN waveforms are used to test the algorithm. We performed a first-order estimation of the high-frequency gravitational wave emission on real interferometric LIGO data from the second half of the third observing run (O3b) with a two-detector network (L1 and H1). The relative error in the estimated slope of the resonant frequency versus time is within 13% for the CCSN candidates tested in this study, at Galactic distances of 1.0, 2.3, 3.1, 4.3, 5.4, 7.3, and 10 kpc. This method provides, to date, the best estimate of the temporal evolution of the high-frequency emission in real interferometric data, and our estimation methodology can be used in future studies of the physical properties of the progenitor. The distances at which comparable performance could be achieved with Einstein Telescope and Cosmic Explorer scale roughly with the improvement in the noise floor.
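The quantity the network above is trained to recover — the slope of the resonant-frequency evolution versus time — can be illustrated with a minimal sketch: a first-order (linear) fit to a recovered time–frequency track, checked against the 13% relative-error figure quoted in the abstract. The toy track, noise level, and function names below are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def slope_of_track(times_s, freqs_hz):
    """Least-squares slope (Hz/s) of a frequency-vs-time track."""
    slope, _intercept = np.polyfit(times_s, freqs_hz, deg=1)
    return slope

def relative_error(estimated, true):
    return abs(estimated - true) / abs(true)

# Toy track: resonant frequency rising at 1500 Hz/s, with mild jitter
# standing in for reconstruction noise (purely illustrative numbers).
t = np.linspace(0.0, 1.0, 200)
true_slope = 1500.0
rng = np.random.default_rng(0)
f = 100.0 + true_slope * t + rng.normal(0.0, 20.0, t.size)

est = slope_of_track(t, f)
print(relative_error(est, true_slope) < 0.13)  # within the 13% quoted above
```

In the paper's setting the track itself must first be extracted from noisy interferometric data, which is the hard part the DNN addresses; the linear fit only illustrates the final slope estimate.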
Award ID(s):
2110555, 2110060
PAR ID:
10522057
Author(s) / Creator(s):
Publisher / Repository:
Physical Review D
Date Published:
Journal Name:
Physical Review D
Volume:
108
Issue:
8
ISSN:
2470-0010
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  3. Abstract: We develop and characterize a parameter estimation methodology for rotating core-collapse supernovae based on the gravitational wave core bounce phase and real detector noise. Building on evidence from numerical simulations that this gravitational wave emission is deterministic and depends on the ratio $$\beta$$ of rotational kinetic to potential energy, we propose an analytical model for the core bounce component that depends on $$\beta$$ and one phenomenological parameter. We validate the model against a pool of representative waveforms, using the fitting factor adopted in compact coalescing binary searches as the metric of goodness: the template bank generated by the model reaches an average accuracy of 94.4% against the numerical simulations and is used as the basis for this work. We then evaluate the error of a matched-filter frequentist parameter estimation of $$\beta$$. With real interferometric noise and a waveform at a distance of 10 kpc and optimal orientation, the one-standard-deviation estimation error of $$\beta$$ lies in the range $$10^{-2}$$ to $$10^{-3}$$ as $$\beta$$ increases. The results are also compared with the scenario where Gaussian recolored data are employed. The analytical model also allows, for the first time, the computation of theoretical minima of the error in $$\beta$$ for any type of estimator. Our analysis indicates that the presence of rotation would be detectable at 0.5 Mpc with third-generation interferometers such as CE or ET.
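The fitting-factor metric mentioned in the abstract above can be sketched as a normalized overlap between a simulated waveform and an analytical template. This toy version assumes a flat noise spectrum and skips the maximization over arrival time and phase that a real matched-filter search performs; the burst shapes and frequencies are illustrative, not drawn from the paper.

```python
import numpy as np

def overlap(h1, h2):
    """Normalized inner product of two real time series (flat-PSD toy match)."""
    return np.dot(h1, h2) / np.sqrt(np.dot(h1, h1) * np.dot(h2, h2))

# Toy "core bounce" burst and a slightly detuned analytical template.
t = np.linspace(0.0, 0.05, 2048)
signal = np.sin(2 * np.pi * 700.0 * t) * np.exp(-t / 0.02)
template = np.sin(2 * np.pi * 710.0 * t) * np.exp(-t / 0.02)

print(overlap(signal, signal))    # identical waveforms -> 1 (up to roundoff)
print(overlap(signal, template))  # detuned template -> strictly below 1
```

A template bank's fitting factor is the maximum of this overlap over all templates in the bank; the 94.4% average accuracy quoted above is that maximum averaged over the validation waveforms.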
  4. We present a new class of AI models for the detection of quasi-circular, spinning, non-precessing binary black hole mergers whose waveforms include higher-order gravitational wave modes and mode-mixing effects in the harmonics. These AI models combine hybrid dilated convolutional neural networks, to accurately model both short- and long-range temporal sequential information of gravitational waves, with graph neural networks, to capture spatial correlations among gravitational wave observatories and consistently identify the presence of a signal in a three-detector network encompassing the Advanced LIGO and Virgo detectors. We first trained these spatiotemporal-graph AI models on synthetic noise, using 1.2 million modeled waveforms to densely sample the signal manifold, within 1.7 h using 256 NVIDIA A100 GPUs on the Polaris supercomputer at the Argonne Leadership Computing Facility. This distributed training approach exhibited optimal classification performance and strong scaling up to 512 NVIDIA A100 GPUs. With these AI ensembles we processed data from a three-detector network and found that an ensemble of four AI models achieves state-of-the-art performance for signal detection, reporting two misclassifications for every decade of searched data. We distributed AI inference over 128 GPUs on the Polaris supercomputer and 128 nodes on the Theta supercomputer, completing the processing of a decade of gravitational wave data from a three-detector network within 3.5 h. Finally, we fine-tuned these AI ensembles to process the entire month of February 2020, part of the O3b LIGO/Virgo observing run, and found six gravitational waves, concurrently identified in Advanced LIGO and Advanced Virgo data, with zero false positives. This analysis was completed in one hour using one NVIDIA A100 GPU.
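The dilated convolution named in the abstract above widens a network's receptive field by sampling its input with gaps, which is how short- and long-range temporal structure can be covered cheaply. This standalone sketch (using the deep-learning convention of no kernel flip; shapes and values are illustrative, not the paper's architecture) shows the mechanism.

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid-mode 1-D convolution whose taps are `dilation` samples apart
    (deep-learning convention: cross-correlation, no kernel flip)."""
    k = len(kernel)
    span = (k - 1) * dilation + 1  # receptive field of one output sample
    out = np.empty(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(kernel[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.arange(10.0)
print(dilated_conv1d(x, [1.0, 1.0], dilation=2))
# each output = x[i] + x[i+2]: [ 2.  4.  6.  8. 10. 12. 14. 16.]
```

Stacking such layers with growing dilation (1, 2, 4, ...) makes the receptive field grow exponentially with depth while the parameter count grows only linearly.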
  5. Abstract: We introduce deep learning models to estimate the masses of the binary components of black hole mergers, $$(m_1, m_2)$$, and three astrophysical properties of the post-merger compact remnant, namely the final spin, $$a_f$$, and the frequency and damping time of the ringdown oscillations of the fundamental $$\ell = m = 2$$ bar mode, $$(\omega_R, \omega_I)$$. Our neural networks combine a modified WaveNet architecture with contrastive learning and normalizing flow. We validate these models against a Gaussian conjugate prior family whose posterior distribution is described by a closed analytical expression. Upon confirming that our models produce statistically consistent results, we used them to estimate the astrophysical parameters $$(m_1, m_2, a_f, \omega_R, \omega_I)$$ of five binary black holes: GW150914, GW170104, GW170814, GW190521, and GW190630. We use PyCBC Inference to directly compare traditional Bayesian parameter estimation with our deep-learning-based posterior distributions. Our results show that our neural network models predict posterior distributions that encode physical correlations, and that our data-driven median results and 90% confidence intervals are similar to those produced with gravitational wave Bayesian analyses. This methodology requires a single NVIDIA V100 GPU to produce median values and posterior distributions within two milliseconds per event. This neural network, and a tutorial for its use, are available at the Data and Learning Hub for Science.
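The validation strategy in the last abstract — checking a learned posterior against a Gaussian conjugate prior family with a closed-form posterior — rests on a textbook result: a Gaussian likelihood with known variance and a Gaussian prior on the mean yields a Gaussian posterior. A sketch of that analytical expression (all numbers illustrative) is:

```python
import numpy as np

def gaussian_conjugate_posterior(data, sigma, mu0, tau0):
    """Posterior N(mean, std^2) for the mean of N(mu, sigma^2) data
    under a conjugate prior N(mu0, tau0^2)."""
    n = len(data)
    precision = n / sigma**2 + 1.0 / tau0**2  # posterior precision adds up
    mean = (np.sum(data) / sigma**2 + mu0 / tau0**2) / precision
    return mean, 1.0 / np.sqrt(precision)

# Toy check: the posterior mean falls between the prior mean and the
# sample mean, and the posterior is narrower than the prior.
rng = np.random.default_rng(1)
data = rng.normal(5.0, 2.0, size=50)
mean, std = gaussian_conjugate_posterior(data, sigma=2.0, mu0=0.0, tau0=3.0)
print(0.0 < mean < np.mean(data))  # shrinkage toward the prior mean
print(std < 3.0)                   # narrower than the prior
```

Because every posterior in this family is known exactly, any learned posterior can be compared to it point by point, which is the statistical-consistency check the abstract describes.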