Gravitational time delays provide a powerful one-step measurement of H0, independent of all other probes. One key ingredient in time-delay cosmography is high-accuracy lens models. Those are currently expensive to obtain, both in terms of computing and investigator time (10^5–10^6 CPU hours and ∼0.5–1 yr, respectively). Major improvements in modelling speed are therefore necessary to exploit the large number of lenses that are forecast to be discovered over the current decade. In order to bypass this roadblock, we develop an automated modelling pipeline and apply it to a sample of 31 lens systems, observed by the Hubble Space Telescope in multiple bands. Our automated pipeline can derive models for 30/31 lenses with a few hours of human time and <100 CPU hours of computing time for a typical system. For each lens, we provide measurements of key parameters and predictions of magnification as well as time delays for the multiple images. We characterize the cosmography-readiness of our models using the stability of differences in the Fermat potential (proportional to time delay) with respect to modelling choices. We find that for 10/30 lenses, our models are cosmography grade or nearly cosmography grade (<3 per cent and 3–5 per cent variations, respectively). For 6/30 lenses, the models are close to cosmography grade (5–10 per cent). These results utilize informative priors and will need to be confirmed by further analysis. However, they are also likely to improve by extending the pipeline modelling sequence and options. In conclusion, we show that uniform cosmography grade modelling of large strong lens samples is within reach.
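To make the quoted stability criterion concrete: image time delays are Fermat-potential differences scaled by the time-delay distance, which is inversely proportional to H0, so per cent-level stability in the Fermat potential maps directly onto per cent-level stability in the predicted delays. Below is a minimal sketch of this relation for a singular isothermal sphere (SIS) deflector; the redshifts, Einstein radius, and source offset are illustrative assumptions, not values from the paper.

```python
# Minimal sketch: time delay from a Fermat-potential difference for an
# SIS lens. All lens parameters below are illustrative assumptions.
import astropy.units as u
from astropy.constants import c
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
z_d, z_s = 0.5, 2.0                    # deflector / source redshifts
theta_E = (1.0 * u.arcsec).to(u.rad)   # Einstein radius
beta = (0.1 * u.arcsec).to(u.rad)      # source offset from the lens axis

# Time-delay distance D_dt = (1 + z_d) D_d D_s / D_ds, proportional to 1/H0.
D_d = cosmo.angular_diameter_distance(z_d)
D_s = cosmo.angular_diameter_distance(z_s)
D_ds = cosmo.angular_diameter_distance_z1z2(z_d, z_s)
D_dt = (1 + z_d) * D_d * D_s / D_ds

# For an SIS, the Fermat-potential difference between the two images
# reduces to Delta_phi = 2 * theta_E * beta (in radians squared).
delta_phi = 2 * theta_E.value * beta.value

delta_t = (D_dt / c * delta_phi).to(u.day)
print(f"predicted delay: {delta_t:.1f}")  # scales as 1/H0
```

A 3 per cent shift in the Fermat-potential difference under different modelling choices would shift the inferred H0 by the same 3 per cent, which is what the cosmography grade thresholds above quantify.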
Time delay lens modelling challenge
ABSTRACT In recent years, breakthroughs in methods and data have enabled gravitational time delays to emerge as a very powerful tool to measure the Hubble constant H0. However, published state-of-the-art analyses require of order 1 yr of expert investigator time and up to a million hours of computing time per system. Furthermore, as precision improves, it is crucial to identify and mitigate systematic uncertainties. With this time delay lens modelling challenge, we aim to assess the level of precision and accuracy of the modelling techniques that are currently fast enough to handle of order 50 lenses, via the blind analysis of simulated data sets. The results in Rungs 1 and 2 show that methods that use only the point source positions tend to have lower precision (10–20 per cent) while remaining accurate. In Rung 2, the methods that exploit the full information of the imaging and kinematic data sets can recover H0 within the target accuracy (|A| < 2 per cent) and precision (<6 per cent per system), even in the presence of a poorly known point spread function and complex source morphology. A post-unblinding analysis of Rung 3 showed the numerical precision of the ray-traced cosmological simulations to be insufficient to test lens modelling methodology at the per cent level, making the results difficult to interpret. A new challenge with improved simulations is needed to make further progress in the investigation of systematic uncertainties. For completeness, we present the Rung 3 results in an appendix and use them to discuss various approaches to mitigating similar subtle data generation effects in future blind challenges.
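For orientation, the accuracy and precision targets quoted above can be evaluated from a set of blind H0 submissions roughly as follows; this sketch follows the challenge's A (mean fractional offset) and P (mean fractional uncertainty) conventions, with the exact normalizations treated as assumptions rather than the paper's code.

```python
# Sketch of challenge-style metrics over N blindly analysed systems.
import numpy as np

def accuracy_A(h0_est, h0_true):
    """Mean fractional offset of the submitted H0 values (bias)."""
    return np.mean((np.asarray(h0_est) - h0_true) / h0_true)

def precision_P(h0_sigma, h0_true):
    """Mean fractional reported uncertainty per system."""
    return np.mean(np.asarray(h0_sigma) / h0_true)

# Illustrative submission for three mock systems (hypothetical numbers).
h0_true = 74.0
h0_est = [72.5, 75.1, 74.8]
h0_sigma = [3.0, 3.5, 2.8]
print(f"A = {100 * accuracy_A(h0_est, h0_true):+.1f} per cent")    # target |A| < 2
print(f"P = {100 * precision_P(h0_sigma, h0_true):.1f} per cent")  # target < 6
```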
- PAR ID: 10287442
- Journal Name: Monthly Notices of the Royal Astronomical Society
- Volume: 503
- Issue: 1
- ISSN: 0035-8711
- Page Range / eLocation ID: 1096 to 1123
- Sponsoring Org: National Science Foundation
More Like this
-
ABSTRACT Strongly lensed explosive transients such as supernovae, gamma-ray bursts, fast radio bursts, and gravitational waves are very promising tools to determine the Hubble constant (H0) in the near future, in addition to strongly lensed quasars. In this work, we show that the transient nature of the point source provides an advantage over quasars: the lensed host galaxy can be observed before or after the transient's appearance. Therefore, the lens model can be derived from images free of contamination from bright point sources. We quantify this advantage by comparing the precision of a lens model obtained from the same lenses with and without point sources. Based on Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) observations with the same sets of lensing parameters, we simulate realistic mock data sets of 48 quasar lensing systems (i.e. adding AGN in the galaxy centre) and 48 galaxy–galaxy lensing systems (assuming the transient source is not visible but the time delay and image positions have been or will be measured). We then model the images and compare the inferences of the lens model parameters and H0. We find that the precision of the lens models (in terms of the deflector mass slope) is better by a factor of 4.1 for the sample without lensed point sources, resulting in an increase of H0 precision by a factor of 2.9. The opportunity to observe the lens systems without the transient point sources provides an additional advantage for time-delay cosmography over lensed quasars. It facilitates the determination of higher signal-to-noise stellar kinematics of the main deflector, and thus its mass density profile, which, in turn, plays a key role in breaking the mass-sheet degeneracy and constraining H0.
-
ABSTRACT It is well known that measurements of H0 from gravitational lens time delays scale as H0 ∝ 1 − κE, where κE is the mean convergence at the Einstein radius RE, but that all available lens data other than the delays provide no direct constraints on κE. The properties of the radial mass distribution constrained by lens data are RE and the dimensionless quantity ξ = REα″(RE)/(1 − κE), where α″(RE) is the second derivative of the deflection profile at RE. Lens models with too few degrees of freedom, like power-law models with densities ρ ∝ r−n, have a one-to-one correspondence between ξ and κE (for a power-law model, ξ = 2(n − 2) and κE = (3 − n)/2 = (2 − ξ)/4). This means that highly constrained lens models with few parameters quickly lead to very precise but inaccurate estimates of κE and hence H0. Based on experiments with a broad range of plausible dark matter halo models, it is unlikely that any current estimates of H0 from gravitational lens time delays are more accurate than ∼10 per cent, regardless of the reported precision.
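The power-law algebra above is easy to verify numerically; the short sketch below (illustrative slopes, not the paper's halo experiments) also shows how pinning κE to the power-law value propagates into H0 through H0 ∝ 1 − κE.

```python
# Check xi = 2(n - 2) and kappa_E = (3 - n)/2 = (2 - xi)/4 for rho ~ r^-n,
# then the H0 shift if the true kappa_E is isothermal (0.5) but the model
# pins kappa_E to its power-law value. Slopes are illustrative.
import numpy as np

for n in (1.9, 2.0, 2.1):
    xi = 2 * (n - 2)
    kappa_E = (3 - n) / 2
    assert np.isclose(kappa_E, (2 - xi) / 4)
    h0_shift = (1 - kappa_E) / (1 - 0.5) - 1   # since H0 scales as 1 - kappa_E
    print(f"n={n:.1f}  xi={xi:+.1f}  kappa_E={kappa_E:.2f}  "
          f"H0 shift={100 * h0_shift:+.0f} per cent")
```

A slope error of only 0.1 in n shifts κE by 0.05 and hence H0 by 10 per cent, consistent with the ∼10 per cent accuracy floor argued above.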
-
Time delay cosmography uses the arrival time delays between images in strong gravitational lenses to measure cosmological parameters, in particular the Hubble constant H0. The lens models used in time delay cosmography omit dark matter subhalos and line-of-sight halos because their effects are assumed to be negligible. We explicitly quantify this assumption by analyzing realistic mock lens systems that include full populations of dark matter subhalos and line-of-sight halos, applying the same modeling assumptions used in the literature to infer H0. We base the mock lenses on six quadruply-imaged quasars that have delivered measurements of the Hubble constant, and quantify the additional uncertainties and/or bias on a lens-by-lens basis. We show that omitting dark substructure does not bias inferences of H0. However, perturbations from substructure contribute an additional source of random uncertainty in the inferred value of H0 that scales as the square root of the lensing volume divided by the longest time delay. This additional source of uncertainty, for which we provide a fitting function, ranges from 0.6 to 2.4 per cent. It may need to be incorporated in the error budget as the precision of cosmographic inferences from single lenses improves, and it sets a precision limit on such inferences.
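If this substructure term is independent of the modelling error, it would enter a single-lens H0 budget in quadrature; a minimal sketch, where the 1.5 per cent value is a placeholder inside the quoted 0.6–2.4 per cent range (the paper's fitting function is not reproduced here):

```python
# Hedged sketch: fold a substructure term into a single-lens H0 error
# budget. sigma_sub = 0.015 is a placeholder within the quoted range.
import numpy as np

sigma_model = 0.050   # fractional H0 uncertainty from the lens model
sigma_sub = 0.015     # substructure term for this hypothetical lens
sigma_total = np.hypot(sigma_model, sigma_sub)  # independent -> quadrature
print(f"total: {100 * sigma_total:.2f} per cent")  # 5.22 per cent here
```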
-
ABSTRACT The Hubble Frontier Fields data, along with multiple data sets obtained by other telescopes, have provided some of the most extensive constraints on cluster lenses to date. Multiple lens modelling teams analyzed the fields and made public a number of deliverables. By comparing these results, we can then undertake a unique and vital test of the state of cluster lens modelling. Specifically, we see how well the different teams can reproduce similar magnifications and mass profiles. We find that the circularly averaged mass profiles of the fields are remarkably constrained (scatter <5 per cent) at distances of 1 arcmin from the cluster core, yet magnifications can vary significantly. Averaged across the six fields, we find a bias of −6 per cent (−17 per cent) and a scatter of ∼40 per cent (∼65 per cent) at a modest magnification of 3 (10). Statistical errors reported by individual teams are often significantly smaller than the differences among all the teams, indicating the importance of continued systematics studies in cluster lensing.
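As a concrete illustration of the comparison statistic, the bias and scatter quoted above can be formed from each team's magnification estimate at a position of known true magnification; the team values below are hypothetical, not Frontier Fields deliverables.

```python
# Sketch: bias and scatter of team magnification estimates relative to
# a true magnification. All values are hypothetical illustrations.
import numpy as np

mu_true = 3.0
mu_teams = np.array([2.6, 3.4, 2.2, 3.9, 2.8])  # hypothetical team values
frac_err = mu_teams / mu_true - 1.0
print(f"bias:    {100 * np.median(frac_err):+.0f} per cent")
print(f"scatter: {100 * np.std(frac_err):.0f} per cent")
```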