

Title: Characterizing the temporal evolution of the high-frequency gravitational wave emission for a core collapse supernova with laser interferometric data: A neural network approach
We present a methodology based on a fully connected neural network algorithm to estimate the temporal evolution of the high-frequency gravitational wave emission for a core collapse supernova (CCSN). For this study, we selected a fully connected deep neural network (DNN) regression model because it can learn both linear and nonlinear relationships between the input and output data, it is well suited to handling high-dimensional input data, and it offers high performance at a low computational cost. To train the machine learning (ML) algorithm, we construct a training dataset using synthetic waveforms, and several CCSN waveforms are used to test the algorithm. We performed a first-order estimation of the high-frequency gravitational wave emission on real interferometric LIGO data from the second half of the third observing run (O3b) with a two-detector network (L1 and H1). The relative error associated with the estimate of the slope of the resonant frequency versus time for the GW from CCSN signals is within 13% for the candidates tested in this study, at Galactic distances of 1.0, 2.3, 3.1, 4.3, 5.4, 7.3, and 10 kpc. This method provides, to date, the best estimate of the temporal evolution of the high-frequency emission in real interferometric data. Our estimation methodology can be used in future studies focused on the physical properties of the progenitor. The distances at which comparable performance could be achieved with the Einstein Telescope and Cosmic Explorer roughly rescale with their noise-floor improvements.
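The slope-regression task described above can be illustrated with a small, self-contained sketch: a one-hidden-layer fully connected network, trained on synthetic noisy frequency tracks f(t) = f0 + s·t, regresses the slope s. All specifics below (network size, slope range, noise level, optimizer settings) are illustrative assumptions, not the authors' configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: each input is a noisy frequency track f(t) = f0 + s*t
# sampled at 32 time points; the regression target is the slope s (Hz/s).
n_samples, n_t = 512, 32
t = np.linspace(0.0, 1.0, n_t)
slopes = rng.uniform(1000.0, 3000.0, n_samples)   # Hz/s, illustrative range
f0 = rng.uniform(100.0, 300.0, n_samples)
X = f0[:, None] + slopes[:, None] * t[None, :]
X += rng.normal(0.0, 20.0, X.shape)               # crude detector-noise proxy

# Normalise inputs and targets so the small MLP trains stably.
Xn = (X - X.mean()) / X.std()
y = (slopes - slopes.mean()) / slopes.std()

# One-hidden-layer fully connected regression network (hypothetical sizes).
W1 = rng.normal(0.0, 0.1, (n_t, 64)); b1 = np.zeros(64)
W2 = rng.normal(0.0, 0.1, (64, 1));  b2 = np.zeros(1)

def forward(x):
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return h, (h @ W2 + b2).ravel()

lr, losses = 0.1, []
for epoch in range(300):
    h, pred = forward(Xn)
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagation of the mean-squared-error loss (factor 2 folded into lr).
    g2 = err[:, None] / n_samples
    dW2 = h.T @ g2; db2 = g2.sum(0)
    gh = (g2 @ W2.T) * (h > 0)
    dW1 = Xn.T @ gh; db1 = gh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"MSE: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

A real pipeline would feed time-frequency representations of whitened strain rather than clean frequency samples, but the regression structure is the same.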
Award ID(s):
1806692
NSF-PAR ID:
10534893
Author(s) / Creator(s):
Publisher / Repository:
American Physical Society
Date Published:
Journal Name:
Physical Review D
ISSN:
2470-0029
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  2. Abstract

    We investigate the impact of rotation and magnetic fields on the dynamics and gravitational wave emission in 2D core–collapse supernova simulations with neutrino transport. We simulate 17 different models of $15\, {\rm M}_\odot$ and $39\, {\rm M}_\odot$ progenitor stars with various initial rotation profiles and initial magnetic field strengths up to $10^{12}\, \mathrm{G}$, assuming a dipolar field geometry in the progenitor. Strong magnetic fields generally prove conducive to shock revival, though this trend is not without exceptions. The impact of rotation on the post-bounce dynamics is more variegated, in line with previous studies. A significant impact on the time-frequency structure of the gravitational wave signal is found only for rapid rotation or strong initial fields. For rapid rotation, the angular momentum gradient at the proto-neutron star surface can appreciably affect the frequency of the dominant mode, so that known analytic relations for the high-frequency emission band no longer hold. In the case of two magnetorotational explosion models, the deviation from these analytic relations is even more pronounced. One of the magnetorotational explosions has been evolved to more than half a second after the onset of the explosion and shows a subsidence of high-frequency emission at late times. Its most conspicuous gravitational wave signature is a high-amplitude tail signal. We also estimate the maximum detection distances for our waveforms. The magnetorotational models do not stand out as more detectable during the post-bounce and explosion phase.

     
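The maximum detection distances mentioned above follow from a simple scaling: gravitational-wave strain, and hence matched-filter signal-to-noise ratio (SNR), falls off as 1/D. A minimal sketch, assuming an SNR-of-8 detection threshold (a common convention, not a value taken from this paper):

```python
def snr_at_distance(snr_ref, d_ref_kpc, d_kpc):
    """Matched-filter SNR rescaled to a new source distance (SNR ∝ 1/D)."""
    return snr_ref * d_ref_kpc / d_kpc

def max_detection_distance(snr_ref, d_ref_kpc, snr_threshold=8.0):
    """Distance at which the SNR drops to the detection threshold."""
    return d_ref_kpc * snr_ref / snr_threshold

# A waveform with SNR 40 at 10 kpc stays above threshold 8 out to:
d_max = max_detection_distance(40.0, 10.0)   # -> 50.0 kpc
```

The same 1/D rescaling underlies the remark in the first abstract that Einstein Telescope and Cosmic Explorer distances track the noise-floor improvement.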
  3. Abstract

    We evaluate several neural-network architectures, both convolutional and recurrent, for gravitational-wave time-series feature extraction by performing point parameter estimation on noisy waveforms from binary-black-hole mergers. We build datasets of 100 000 elements for each of four different waveform models (or approximants) in order to test how approximant choice affects feature extraction. Our choices include SEOBNRv4P and IMRPhenomPv3, which contain only the dominant quadrupole emission mode, alongside IMRPhenomPv3HM and NRHybSur3dq8, which also account for high-order modes. Each dataset element is injected into detector noise corresponding to the third observing run of the LIGO-Virgo-KAGRA (LVK) collaboration. We identify the temporal convolutional network architecture as the overall best performer in terms of training and validation losses and absence of overfitting to data. Comparison of results between datasets shows that the choice of waveform approximant for the creation of a dataset conditions the feature extraction ability of a trained network. Hence, care should be taken when building a dataset for the training of neural networks, as certain approximants may result in better network convergence of evaluation metrics. However, this performance does not necessarily translate to data which is more faithful to numerical relativity simulations. We also apply this network on actual signals from LVK runs, finding that its feature-extracting performance can be effective on real data.

     
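The building block of the temporal convolutional network singled out above is the causal dilated convolution: the output at time t only sees inputs at t, t−d, t−2d, …, and stacking layers with dilations 1, 2, 4, … grows the receptive field exponentially. A minimal numpy sketch of the idea (not the paper's implementation):

```python
import numpy as np

def causal_dilated_conv(x, w, dilation=1):
    """out[t] = sum_j w[j] * x[t - dilation*j], with zero padding (causal)."""
    k, n = len(w), len(x)
    pad = dilation * (k - 1)
    xp = np.concatenate([np.zeros(pad), np.asarray(x, float)])
    return np.array([np.dot(w, xp[pad + t - dilation * np.arange(k)])
                     for t in range(n)])

# Stacking kernel-size-2 averaging layers with dilations 1, 2, 4 gives each
# output a receptive field of 8 samples: the composition equals an 8-tap
# moving average, so y[7] is the mean of x[0..7].
x = np.arange(8.0)
y = x
for d in (1, 2, 4):
    y = causal_dilated_conv(y, np.array([0.5, 0.5]), d)
```

Real TCNs add nonlinearities, residual connections, and learned weights per layer; the dilation pattern is what this sketch demonstrates.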
  4. Abstract

    Understanding the noise in gravitational-wave detectors is central to detecting and interpreting gravitational-wave signals. Glitches are transient, non-Gaussian noise features that can have a range of environmental and instrumental origins. The Gravity Spy project uses a machine-learning algorithm to classify glitches based upon their time–frequency morphology. The resulting set of classified glitches can be used as input to detector-characterisation investigations of how to mitigate glitches, or data-analysis studies of how to ameliorate the impact of glitches. Here we present the results of the Gravity Spy analysis of data up to the end of the third observing run of the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO). We classify 233,981 glitches from LIGO Hanford and 379,805 glitches from LIGO Livingston into morphological classes. We find that the distribution of glitches differs between the two LIGO sites. This highlights the potential need for studies of data quality to be individually tailored to each gravitational-wave observatory.

     
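The site-to-site difference in glitch-class distributions noted above can be quantified by comparing per-class fractions, e.g. with the total-variation distance. The class names and counts below are made-up placeholders for illustration, not Gravity Spy's actual tallies:

```python
import numpy as np

# Hypothetical per-class glitch counts at each site (illustrative only).
classes = ["Blip", "Scattered_Light", "Koi_Fish", "Whistle"]
hanford    = np.array([9000.0,  4000.0, 2500.0,  500.0])
livingston = np.array([6000.0, 15000.0, 1500.0, 2000.0])

# Normalise counts to per-site class fractions.
p_h = hanford / hanford.sum()
p_l = livingston / livingston.sum()

# Total-variation distance: 0 for identical distributions, 1 for disjoint ones.
tv = 0.5 * np.abs(p_h - p_l).sum()
print(f"TV distance between site distributions: {tv:.3f}")
```

A nonzero distance of this kind is one way to make precise the claim that data-quality studies may need per-observatory tailoring.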
  5. Abstract

    Tissue dynamics play critical roles in many physiological functions and provide important metrics for clinical diagnosis. Capturing real-time high-resolution 3D images of tissue dynamics, however, remains a challenge. This study presents a hybrid physics-informed neural network algorithm that infers 3D flow-induced tissue dynamics and other physical quantities from sparse 2D images. The algorithm combines a recurrent neural network model of soft tissue with a differentiable fluid solver, leveraging prior knowledge in solid mechanics to project the governing equation onto a discrete eigenspace. The algorithm uses a long short-term memory (LSTM) based recurrent encoder-decoder connected with a fully connected neural network to capture the temporal dependence of the flow-structure interaction. The effectiveness and merit of the proposed algorithm are demonstrated on synthetic data from a canine vocal fold model and experimental data from excised pigeon syringes. The results show that the algorithm accurately reconstructs 3D vocal dynamics, aerodynamics, and acoustics from sparse 2D vibration profiles.

     
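The eigenspace projection mentioned above works as in classical modal analysis: projecting the governing equation onto the stiffness eigenmodes decouples it into independent scalar oscillators. A toy sketch with a 3-degree-of-freedom stiffness matrix (the paper's actual model is a vocal-fold finite-element discretisation, not this toy):

```python
import numpy as np

# Toy 3-DOF stiffness matrix standing in for a soft-tissue FE model (assumption).
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# With unit masses, the eigenproblem K @ phi = omega^2 phi yields the modal
# basis; eigh returns ascending eigenvalues and orthonormal eigenvectors.
omega2, Phi = np.linalg.eigh(K)
omegas = np.sqrt(omega2)

# Projecting q'' + K q = f onto the modes decouples it into scalar oscillators:
#   eta_i'' + omega_i^2 eta_i = (Phi^T f)_i
f = np.array([0.0, 1.0, 0.0])
modal_force = Phi.T @ f
```

Truncating to the first few modes gives the low-dimensional state on which a recurrent network such as the paper's LSTM encoder-decoder can be trained cheaply.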