The importance of alternative methods for measuring the Hubble constant, such as time-delay cosmography, is highlighted by the recent Hubble tension. It is paramount to thoroughly investigate and rule out systematic biases in all measurement methods before we can accept new physics as the source of this tension. In this study, we perform a check for systematic biases in the lens modelling procedure of time-delay cosmography by comparing independent and blind time-delay predictions of the system WGD 2038−4008 from two teams using two different software programs:
GLEE and LENSTRONOMY. The predicted time delays from the two teams incorporate the stellar kinematics of the deflector and the external convergence from line-of-sight structures. The unblinded time-delay predictions from the two teams agree within 1.2σ, implying that once the time delay is measured, the inferred Hubble constant will also be mutually consistent. However, there is a ∼4σ discrepancy between the power-law model slope and external shear, which is a significant discrepancy at the level of the lens models before the stellar kinematics and the external convergence are incorporated. We identify the difference in the reconstructed point spread function (PSF) to be the source of this discrepancy. When the same reconstructed PSF was used by both teams, we achieved excellent agreement, within ∼0.6σ, indicating that potential systematics stemming from source reconstruction algorithms and investigator choices are well under control. We recommend that future studies supersample the PSF as needed and marginalize over multiple algorithms or realizations of the PSF reconstruction to mitigate the systematics associated with the PSF. A future study will measure the time delays of the system WGD 2038−4008 and infer the Hubble constant based on our mass models.
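The link between measured time delays and the Hubble constant invoked above is the standard time-delay cosmography relation:

```latex
\Delta t_{ij} = \frac{D_{\Delta t}}{c}\left[\phi(\theta_i,\beta) - \phi(\theta_j,\beta)\right],
\qquad
\phi(\theta,\beta) = \frac{(\theta-\beta)^2}{2} - \psi(\theta),
\qquad
D_{\Delta t} \equiv (1+z_{\rm d})\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}} \propto \frac{1}{H_0},
```

where $\phi$ is the Fermat potential, $\psi$ the lensing potential, and $D_{\rm d}$, $D_{\rm s}$, $D_{\rm ds}$ the angular diameter distances to the deflector, to the source, and between them. A measured $\Delta t_{ij}$, combined with a lens model for $\phi$, thus yields $D_{\Delta t}$ and hence $H_0$.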
In this paper, we present work towards the development of a new data analytics and machine learning (ML) framework, called MagmaDNN. Our main goal is to provide scalable, high-performance data analytics and ML solutions for scientific applications running on current and upcoming heterogeneous many-core GPU-accelerated architectures. To this end, since many of the functionalities needed are based on standard linear algebra (LA) routines, we designed MagmaDNN to derive its performance power from the MAGMA library. The close integration provides the fundamental (scalable, high-performance) LA routines available in MAGMA as a backend to MagmaDNN. We present some design issues for performance and scalability that are specific to ML using deep neural networks (DNNs), as well as the MagmaDNN designs towards overcoming them. In particular, MagmaDNN uses well-established HPC techniques from the area of dense LA, including task-based parallelization, DAG representations, scheduling, mixed-precision algorithms, asynchronous solvers, and autotuned hyperparameter optimization. We illustrate these techniques and their incorporation and use to outperform other currently available frameworks.
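To illustrate one of the dense-LA techniques listed above, here is a minimal NumPy sketch of mixed-precision iterative refinement, the general idea behind MAGMA-style mixed-precision solvers. The function name and structure are illustrative assumptions, not MagmaDNN's actual API; MAGMA performs the low-precision work on the GPU with an explicit LU factorization, for which `np.linalg.solve` stands in here.

```python
import numpy as np

def mixed_precision_solve(A, b, iters=3):
    """Solve A x = b: factor/solve in float32 (cheap), refine in float64.

    Illustrative sketch only: the expensive O(n^3) work is done in low
    precision, while cheap O(n^2) residual corrections restore full
    double-precision accuracy.
    """
    A32 = A.astype(np.float32)
    # Initial low-precision solve
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x  # residual computed in full (float64) precision
        # Correction from another cheap low-precision solve
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For well-conditioned systems, a handful of refinement steps recovers double-precision accuracy at nearly single-precision cost, which is the performance argument behind mixed-precision algorithms in dense LA libraries.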

We investigate the environment and line of sight of the H0LiCOW lens B1608+656 using Subaru Suprime-Cam and the Hubble Space Telescope (HST) to perform a weak lensing analysis. We compare three different methods to reconstruct the mass map of the field, i.e. the standard Kaiser-Squires inversion coupled with inpainting and Gaussian or wavelet filtering, and ${\tt Glimpse}$, a method based on sparse regularization of the shear field. We find no substantial difference between the 2D mass reconstructions, but we find that the ground-based data are less sensitive to small-scale structures than the space-based observations. Marginalising over the results obtained with all the reconstruction techniques applied to the two available HST filters, F606W and F814W, we estimate the external convergence at the position of B1608+656 to be $\kappa_{\mathrm{ext}} = 0.11^{+0.06}_{-0.04}$, where the error bars correspond to the 16th and 84th percentiles, respectively. This result is compatible with previous estimates obtained using the number-counts technique, suggesting with a completely independent method that B1608+656 resides in an overdense line of sight. Using our mass reconstructions, we also compare the convergence at the position of several groups of galaxies in the field of B1608+656 with mass measurements using various analytical mass profiles, and find that the weak lensing results favor truncated halo models.
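The Kaiser-Squires inversion mentioned above is, at its core, a simple Fourier-space operation relating the shear field to the convergence. The following is a minimal flat-sky sketch of that core step only, assuming periodic boundaries; it omits the inpainting, filtering, and noise treatment that a real analysis such as the one in this paper requires.

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Flat-sky Kaiser-Squires inversion: shear maps -> convergence map.

    Multiplies the Fourier-transformed complex shear by the conjugate
    of the lensing kernel D(k) = (k1^2 - k2^2 + 2i k1 k2) / |k|^2.
    Illustrative sketch only (no masks, no smoothing).
    """
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[np.newaxis, :]
    k2 = np.fft.fftfreq(ny)[:, np.newaxis]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0  # avoid division by zero; k=0 mode is unconstrained
    g_hat = np.fft.fft2(gamma1 + 1j * gamma2)
    kappa_hat = ((k1**2 - k2**2) - 2j * k1 * k2) / k_sq * g_hat
    kappa_hat[0, 0] = 0.0  # mean convergence is not recoverable from shear
    kappa = np.fft.ifft2(kappa_hat)
    return kappa.real  # E-mode map; the imaginary part is the B-mode
```

The unrecoverable $k=0$ mode is the mass-sheet degeneracy appearing again in the weak lensing context: a uniform convergence sheet produces no shear.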

The H0LiCOW collaboration inferred, via strong gravitational lensing time delays, a Hubble constant value of $H_0 = 73.3^{+1.7}_{-1.8}$ km s$^{-1}$ Mpc$^{-1}$, describing deflector mass density profiles by either a power law or stars (constant mass-to-light ratio) plus standard dark matter halos. The mass-sheet transform (MST), which leaves the lensing observables unchanged, is considered the dominant source of residual uncertainty in $H_0$. We quantify any potential effect of the MST with a flexible family of mass models that directly encodes it and is hence maximally degenerate with $H_0$. Our calculation is based on a new hierarchical Bayesian approach in which the MST is only constrained by stellar kinematics. The approach is validated on mock lenses, which are generated from hydrodynamic simulations. We first applied the inference to the TDCOSMO sample of seven lenses, six of which are from H0LiCOW, and measured $H_0 = 74.5^{+5.6}_{-6.1}$ km s$^{-1}$ Mpc$^{-1}$. Secondly, in order to further constrain the deflector mass density profiles, we added imaging and spectroscopy for a set of 33 strong gravitational lenses from the Sloan Lens ACS (SLACS) sample. For nine of the 33 SLACS lenses, we used resolved kinematics to constrain the stellar anisotropy. From the joint hierarchical analysis of the TDCOSMO+SLACS sample, we measured $H_0 = 67.4^{+4.1}_{-3.2}$ km s$^{-1}$ Mpc$^{-1}$. This measurement assumes that the TDCOSMO and SLACS galaxies are drawn from the same parent population. The blind H0LiCOW, TDCOSMO-only, and TDCOSMO+SLACS analyses are in mutual statistical agreement. The TDCOSMO+SLACS analysis prefers marginally shallower mass profiles than H0LiCOW or TDCOSMO-only. Without relying on the form of the mass density profile used by H0LiCOW, we achieve a ∼5% measurement of $H_0$.
While our new hierarchical analysis does not statistically invalidate the mass profile assumptions by H0LiCOW – and thus the $H_0$ measurement relying on them – it demonstrates the importance of understanding the mass density profile of elliptical galaxies. The uncertainties on $H_0$ derived in this paper can be reduced by physical or observational priors on the form of the mass profile, or by additional data.
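The mass-sheet transform referred to in this abstract is the standard degeneracy: rescaling the convergence profile and adding a uniform mass sheet leaves all imaging observables unchanged,

```latex
\kappa(\theta) \;\to\; \kappa_\lambda(\theta) = \lambda\,\kappa(\theta) + (1-\lambda),
\qquad
\beta \;\to\; \lambda\,\beta ,
```

while the predicted time delays rescale as $\Delta t \to \lambda\,\Delta t$. With the measured delays held fixed, the inferred Hubble constant therefore transforms as $H_0 \to \lambda\,H_0$, which is why independent information such as stellar kinematics is needed to break the degeneracy.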

Time-delay cosmography of lensed quasars has achieved 2.4% precision on the measurement of the Hubble constant, $H_0$. As part of an ongoing effort to uncover and control systematic uncertainties, we investigate three potential sources: (1) stellar kinematics, (2) line-of-sight effects, and (3) the deflector mass model. To meet this goal in a quantitative way, we reproduced the H0LiCOW/SHARP/STRIDES (hereafter TDCOSMO) procedures on a set of real and simulated data, and we find the following. First, stellar kinematics cannot be a dominant source of error or bias, since we find that a systematic change of 10% in the measured velocity dispersion leads to only a 0.7% shift in $H_0$ from the seven lenses analyzed by TDCOSMO. Second, we find no bias to arise from incorrect estimation of the line-of-sight effects. Third, we show that elliptical composite (stars + dark matter halo), power-law, and cored power-law mass profiles have the flexibility to yield a broad range of $H_0$ values. However, the TDCOSMO procedures that model the data with both composite and power-law mass profiles are informative. If the models agree, as we observe in real systems owing to the “bulge-halo” conspiracy, $H_0$ is recovered precisely and accurately by both models. If the two models disagree, as in the case of some pathological models illustrated here, the TDCOSMO procedure either discriminates between them through the goodness of fit, or it accounts for the discrepancy in the final error bars provided by the analysis. This conclusion is consistent with a reanalysis of six of the TDCOSMO (real) lenses: the composite model yields $H_0 = 74.0^{+1.7}_{-1.8}$ km s$^{-1}$ Mpc$^{-1}$, while the power-law model yields $H_0 = 74.2^{+1.6}_{-1.6}$ km s$^{-1}$ Mpc$^{-1}$. In conclusion, we find no evidence of bias or errors larger than the current statistical uncertainties reported by TDCOSMO.
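For reference, the power-law deflector model compared against the composite model above is conventionally parameterized by its convergence profile,

```latex
\kappa(\theta) = \frac{3-\gamma'}{2}\left(\frac{\theta_{\rm E}}{\theta}\right)^{\gamma'-1},
```

where $\gamma'$ is the logarithmic slope of the 3D density profile ($\rho \propto r^{-\gamma'}$, with $\gamma' = 2$ the isothermal case) and $\theta_{\rm E}$ the Einstein radius. The "bulge-halo conspiracy" is the empirical tendency of the stellar and dark matter components to sum to a profile close to this single power law.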

We present new measurements of the time delays of WFI2033−4723. The data sets used in this work include 14 years of data taken at the 1.2 m Leonhard Euler Swiss telescope, 13 years of data from the SMARTS 1.3 m telescope at Las Campanas Observatory, and a single year of high-cadence and high-precision monitoring at the MPIA 2.2 m telescope. The time delays measured from these different data sets, all taken in the R band, are in good agreement with each other and with previous measurements from the literature. Combining all the time-delay estimates from our data sets results in $\Delta t_{\rm AB} = 36.2^{+0.7}_{-0.8}$ days (2.1% precision), $\Delta t_{\rm AC} = -23.3^{+1.2}_{-1.4}$ days (5.6%), and $\Delta t_{\rm BC} = -59.4^{+1.3}_{-1.3}$ days (2.2%). In addition, the close image pair A1-A2 of the lensed quasar can be resolved in the MPIA 2.2 m data. We measure a time delay consistent with zero in this pair of images. We also explore the prior distributions of the microlensing time delay potentially affecting the cosmological time-delay measurements of WFI2033−4723. Our time-delay measurements are not precise enough to conclude whether a microlensing time delay is present or absent from the data. This work is part of the H0LiCOW series of papers focusing on measuring the Hubble constant from WFI2033−4723.
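As a rough illustration of how independent delay estimates like those above can be combined, here is a textbook inverse-variance weighting sketch. This is not the paper's actual procedure, which combines full delay posteriors from curve-shifting techniques; symmetrizing the asymmetric error bars by averaging is an illustrative shortcut.

```python
import math

def combine_delays(estimates):
    """Inverse-variance weighted combination of independent delay estimates.

    `estimates` is a list of (value, sigma_plus, sigma_minus) tuples.
    Asymmetric error bars are symmetrized by simple averaging
    (illustrative shortcut, not the full-posterior combination used
    in practice).
    """
    weights = []
    for value, sig_plus, sig_minus in estimates:
        sigma = 0.5 * (sig_plus + sig_minus)  # symmetrize the error bar
        weights.append((value, 1.0 / sigma**2))
    wsum = sum(w for _, w in weights)
    mean = sum(v * w for v, w in weights) / wsum
    return mean, math.sqrt(1.0 / wsum)
```

Because the weights scale as $1/\sigma^2$, the combined estimate is dominated by the high-precision, high-cadence monitoring season, which is the motivation for the MPIA 2.2 m campaign described above.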