Gravitational-wave observations of neutron star mergers can probe the nuclear equation of state by measuring the imprint of the neutron star's tidal deformability on the signal. We investigate the ability of future gravitational-wave observations to produce a precise measurement of the equation of state from binary neutron star inspirals. Because the measurability of the tidal effect depends on the equation of state, we explore several equations of state that span current observational constraints. We generate a population of binary neutron stars as seen by a simulated Advanced LIGO–Virgo network, as well as by a planned Cosmic Explorer observatory. We perform Bayesian inference to measure the parameters of each signal, and we combine measurements across each population to determine R_1.4, the radius of a 1.4 M⊙ neutron star. We find that, with 321 signals, the LIGO–Virgo network is able to measure R_1.4 to better than 2% precision for all equations of state we consider; however, we also find that achieving this precision could take decades of observation, depending on the equation of state and the merger rate. On the other hand, we find that with one year of observation, Cosmic Explorer will measure R_1.4 to better than 0.6% precision. In both cases, we find that systematic biases, such as from an incorrect mass prior, can significantly impact measurement accuracy, and efforts will be required to mitigate these effects.
- Award ID(s): 1836814
- NSF-PAR ID: 10279716
- Date Published:
- Journal Name: Physical Review Letters
- ISSN: 1092-0145
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
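The headline result of the abstract above — per-event posteriors on R_1.4 tightening as measurements are combined across a population — can be illustrated with a toy calculation. Everything here (the true radius, the per-event uncertainty, the assumption of Gaussian posteriors of equal width) is an illustrative assumption, not a number from the paper, except the 321-event count quoted in the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

R_TRUE = 12.0       # hypothetical true R_1.4 in km (illustrative)
SIGMA_EVENT = 1.5   # hypothetical per-event 1-sigma uncertainty in km
N_EVENTS = 321      # event count quoted in the abstract

# Simulate per-event point estimates scattered about the truth.
estimates = rng.normal(R_TRUE, SIGMA_EVENT, size=N_EVENTS)

# For Gaussian posteriors of equal width, the combined posterior is also
# Gaussian, with mean equal to the sample mean and width sigma / sqrt(N).
combined_mean = estimates.mean()
combined_sigma = SIGMA_EVENT / np.sqrt(N_EVENTS)

print(f"combined: {combined_mean:.2f} +/- {combined_sigma:.2f} km")
print(f"fractional precision: {100 * combined_sigma / combined_mean:.2f}%")
```

The 1/sqrt(N) scaling is why the precision quoted in the abstract depends so strongly on the merger rate: halving the detection rate doubles the time needed to reach a target precision.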
Abstract Since the very first detection of gravitational waves from the coalescence of two black holes in 2015, Bayesian statistical methods have been routinely applied by LIGO and Virgo to extract the signal from noisy interferometric measurements, obtain point estimates of the physical parameters responsible for producing the signal, and rigorously quantify their uncertainties. Different computational techniques have been devised depending on the source of the gravitational radiation and the gravitational waveform model used. Prominent sources of gravitational waves are binary black hole or neutron star mergers, the only objects that have been observed by detectors to date. Gravitational waves from core-collapse supernovae, rapidly rotating neutron stars, and the stochastic gravitational-wave background also lie in the sensitivity band of the ground-based interferometers and are expected to be observable in future observation runs. Because nonlinearities of the complex waveforms and the high-dimensional parameter spaces preclude analytic evaluation of the posterior distribution, posterior inference for all these sources relies on computer-intensive simulation techniques such as Markov chain Monte Carlo methods. A review of state-of-the-art Bayesian statistical parameter estimation methods is given for researchers in this cross-disciplinary area of gravitational-wave data analysis.
This article is categorized under:
Applications of Computational Statistics > Signal and Image Processing and Coding
Statistical and Graphical Methods of Data Analysis > Markov Chain Monte Carlo (MCMC)
Statistical Models > Time Series Models
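As a concrete illustration of the Markov chain Monte Carlo machinery this review surveys, here is a minimal random-walk Metropolis–Hastings sampler for a toy one-parameter "signal amplitude" problem. The model, prior bounds, step size, and chain length are all illustrative assumptions, not choices from any LIGO–Virgo pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 100 noisy measurements of a single signal amplitude.
TRUE_A = 2.0
data = TRUE_A + rng.normal(0.0, 1.0, size=100)

def log_posterior(a):
    """Gaussian likelihood times a flat prior on [0, 10], in log space."""
    if not 0.0 <= a <= 10.0:
        return -np.inf
    return -0.5 * np.sum((data - a) ** 2)

# Random-walk Metropolis-Hastings: propose, then accept with
# probability min(1, posterior ratio).
n_steps, step = 20_000, 0.2
chain = np.empty(n_steps)
a, logp = 5.0, log_posterior(5.0)
for i in range(n_steps):
    prop = a + step * rng.normal()
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        a, logp = prop, logp_prop
    chain[i] = a

burned = chain[5_000:]   # discard burn-in
print(f"posterior mean = {burned.mean():.2f}, std = {burned.std():.2f}")
```

Real gravitational-wave inference replaces the one-line likelihood with a waveform model evaluated against detector data in up to ~15 dimensions, which is what makes the computational techniques discussed in the review necessary.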
-
ABSTRACT Gravitational waves from binary neutron star post-merger remnants have the potential to uncover the physics of the hot nuclear equation of state. These gravitational-wave signals are high frequency (∼kHz) and short-lived (O(10 ms)), which introduces potential problems for data-analysis algorithms due to the presence of non-stationary and non-Gaussian noise artefacts in gravitational-wave observatories. We quantify the degree to which these noise features in LIGO data may affect our confidence in identifying post-merger gravitational-wave signals. We show that the combination of vetoing data with non-stationary glitches and the application of the Allen χ² veto (usually reserved for long-lived, lower-frequency gravitational-wave signals) allows one to confidently detect post-merger signals with signal-to-noise ratio ρ ≳ 8. We discuss the need to incorporate data-quality checks and vetoes into realistic post-merger gravitational-wave searches, and describe their relevance to calculating realistic false-alarm and false-dismissal rates.
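The Allen χ² veto mentioned here splits the matched-filter signal-to-noise ratio across frequency bands carrying equal template power and checks that each band contributes its expected share; a real signal matches the template in every band, while a glitch concentrates power in a few. The sketch below is a schematic toy (flat whitened template, white Gaussian noise, equal-width bands), not the production implementation used in LIGO searches:

```python
import numpy as np

rng = np.random.default_rng(1)
P = 8       # number of frequency bands
N = 4096    # frequency-domain samples in this toy

# Toy whitened template with a flat spectrum, so each of the P equal-width
# bands carries exactly 1/P of the template's power.
template = np.ones(N) / np.sqrt(N)

def band_snrs(data_f):
    """Complex matched-filter overlaps of the data with the template, per band."""
    edges = np.linspace(0, N, P + 1, dtype=int)
    return np.array([np.vdot(template[a:b], data_f[a:b])
                     for a, b in zip(edges[:-1], edges[1:])])

def allen_chisq(data_f):
    z_l = band_snrs(data_f)
    z = z_l.sum()                        # full-band matched-filter output
    return P * np.sum(np.abs(z_l - z / P) ** 2)

# A signal proportional to the template contributes equally to every band,
# so it cancels out of the statistic: chi^2 is zero without noise...
signal = 8.0 * template                  # injection with SNR ~ 8
print(f"signal only:    chi^2 = {allen_chisq(signal):.2e}")

# ...and in white Gaussian noise (unit variance per real and imaginary
# component) the statistic follows a chi^2 with 2P - 2 degrees of freedom.
noise = rng.normal(size=N) + 1j * rng.normal(size=N)
chisq = allen_chisq(noise + signal)
print(f"signal + noise: chi^2 = {chisq:.1f} (expect ~ {2 * P - 2})")
```

A glitch that mimics the total SNR but not the template's band-by-band power distribution produces a large χ², which is why the statistic discriminates signals from noise transients.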
-
ABSTRACT The detection of the optical transient AT2017gfo proved that binary neutron star mergers are progenitors of kilonovae (KNe). Using a combination of numerical-relativity and radiative-transfer simulations, the community has developed sophisticated models for these transients for a wide portion of the expected parameter space. Using these simulations and surrogate models made from them, it has been possible to perform Bayesian inference of the observed signals to infer properties of the ejected matter. It has been pointed out that combining inclination constraints derived from the KN with gravitational-wave measurements increases the accuracy with which binary parameters can be estimated, in particular breaking the distance–inclination degeneracy of gravitational-wave inference. To avoid bias from the unknown ejecta geometry, constraints on the inclination angle for AT2017gfo should be insensitive to the employed models. In this work, we compare different assumptions about the ejecta and radiative reprocessing used by the community and investigate their impact on the parameter inference. While most inferred parameters agree, we find disagreement between posteriors for the inclination angle for different geometries that have been used in the current literature. According to our study, the inclusion of reprocessing of the photons between different ejecta types improves the model fits to AT2017gfo and, in some cases, affects the inferred constraints. Our study motivates the inclusion of large ∼1 mag uncertainties in the KN models employed for Bayesian analysis to capture yet unknown systematics, especially when inferring inclination angles, although smaller uncertainties seem appropriate to capture model systematics for other intrinsic parameters. We can use this method to impose soft constraints on the ejecta geometry of the KN AT2017gfo.
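In practice, a ∼1 mag model-systematic budget like the one advocated above enters a light-curve fit as an error floor added in quadrature to the photometric uncertainties, which broadens the likelihood and down-weights model–data tension. A minimal sketch, with entirely made-up magnitudes, errors, and model values:

```python
import numpy as np

# Toy light-curve comparison: observed AB magnitudes vs. a model prediction.
# All values are hypothetical, chosen only to illustrate the mechanism.
obs_mag = np.array([17.5, 18.1, 18.9, 19.6])    # hypothetical photometry
obs_err = np.array([0.05, 0.06, 0.08, 0.10])    # statistical errors (mag)
model_mag = np.array([17.2, 18.0, 19.1, 19.9])  # hypothetical model output

def log_likelihood(sigma_sys):
    """Gaussian log-likelihood with a systematic floor added in quadrature."""
    var = obs_err ** 2 + sigma_sys ** 2
    resid = obs_mag - model_mag
    return -0.5 * np.sum(resid ** 2 / var + np.log(2 * np.pi * var))

# With no floor, small statistical errors make modest model-data offsets
# look like severe tension; a ~1 mag floor absorbs unknown model systematics.
print(f"no floor:    log L = {log_likelihood(0.0):.2f}")
print(f"1 mag floor: log L = {log_likelihood(1.0):.2f}")
```

The trade-off the abstract describes follows directly: a large floor guards against biased inclination posteriors, but an over-large floor washes out the information needed to constrain other intrinsic parameters.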
-
Abstract We present an improved version of the nested sampling algorithm nessai in which the core algorithm is modified to use importance weights. In the modified algorithm, samples are drawn from a mixture of normalising flows, and the requirement for samples to be independently and identically distributed (i.i.d.) according to the prior is relaxed. Furthermore, samples can be added in any order, independently of a likelihood constraint, and the evidence can be updated with batches of samples. We call the modified algorithm i-nessai. We first validate i-nessai using analytic likelihoods with known Bayesian evidences and show that the evidence estimates are unbiased in up to 32 dimensions. We compare i-nessai to standard nessai for the analytic likelihoods and the Rosenbrock likelihood; the results show that i-nessai is consistent with nessai whilst producing more precise evidence estimates. We then test i-nessai on 64 simulated gravitational-wave signals from binary black hole coalescence and show that it produces unbiased estimates of the parameters. We compare our results to those obtained using standard nessai and dynesty and find that i-nessai requires 2.68 and 13.3 times fewer likelihood evaluations to converge, respectively. We also test i-nessai on an 80 s simulated binary neutron star signal using a reduced-order-quadrature basis and find that, on average, it converges in 24 min whilst requiring fewer likelihood evaluations than either nessai or dynesty. These results demonstrate that i-nessai is consistent with nessai and dynesty whilst also being more efficient.
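For intuition about what i-nessai modifies, here is a minimal implementation of classic nested sampling (no importance weights, no normalising flows) on a one-dimensional toy problem with an analytic evidence. The rejection sampling used for the likelihood-constrained prior draws is tractable only because the problem is one-dimensional; replacing that step with draws from normalising flows is precisely the role nessai plays. All tuning numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
SIGMA = 0.05
# Narrow Gaussian likelihood inside a Uniform(0, 1) prior, so the
# evidence is analytically Z = integral L dx ~ sigma * sqrt(2 pi).
LOG_Z_TRUE = np.log(SIGMA * np.sqrt(2 * np.pi))

def log_like(x):
    return -0.5 * ((x - 0.5) / SIGMA) ** 2

n_live, n_iter = 200, 1400
live = rng.uniform(size=n_live)
live_logl = log_like(live)
log_z, log_x_prev = -np.inf, 0.0

for i in range(1, n_iter + 1):
    worst = int(np.argmin(live_logl))
    log_x = -i / n_live                       # expected log prior volume
    log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
    log_z = np.logaddexp(log_z, live_logl[worst] + log_w)  # Z += L_min * dX
    log_x_prev = log_x
    # Replace the worst live point with a prior draw above the likelihood
    # threshold (rejection sampling; feasible only in this 1-D toy).
    threshold = live_logl[worst]
    while True:
        x = rng.uniform()
        if log_like(x) > threshold:
            live[worst], live_logl[worst] = x, log_like(x)
            break

# Add the remaining live points' contribution from the final prior volume.
log_z = np.logaddexp(log_z, log_x_prev + np.log(np.mean(np.exp(live_logl))))
print(f"log Z = {log_z:.3f} (analytic: {LOG_Z_TRUE:.3f})")
```

Note the strict ordering baked into the loop: each dead point must respect a monotonically increasing likelihood threshold. Relaxing exactly this constraint, so that samples can arrive in any order with importance weights, is the change the abstract describes.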