When strong gravitational lenses are to be used as an astrophysical or cosmological probe, models of their mass distributions are often needed. We present a new, time-efficient automation code for the uniform modeling of strongly lensed quasars with
GLEE, a lens-modeling software for multiband data. By using the observed positions of the lensed quasars and the spatially extended surface brightness distribution of the host galaxy of the lensed quasar, we obtain a model of the mass distribution of the lens galaxy. We applied this uniform modeling pipeline to a sample of nine strongly lensed quasars for which images were obtained with the Wide Field Camera 3 of the Hubble Space Telescope. The models show well-reconstructed light components and a good alignment between mass and light centroids in most cases. We find that the automated modeling code significantly reduces the input time during the modeling process for the user. The time for preparing the required input files is reduced by a factor of 3, from ~3 h to about 1 h. The active input time during the modeling process is reduced by a factor of 10, from ~10 h to about 1 h per lens system. This automated uniform modeling pipeline can efficiently produce uniform models of extensive lens-system samples that can be used for further cosmological analysis. A blind test that compared our results with those of an independent automated modeling pipeline based on the modeling software Lenstronomy revealed important lessons. Quantities such as the Einstein radius, astrometry, mass flattening, and position angle are generally robustly determined. Other quantities, such as the radial slope of the mass density profile and the predicted time delays, depend crucially on the quality of the data and on the accuracy with which the point spread function is reconstructed. Better data and/or a more detailed analysis are necessary to elevate our automated models to cosmography grade. Nevertheless, our pipeline enables the quick selection of lenses for follow-up and further modeling, which significantly speeds up the construction of cosmography-grade models.
This important step forward will help us take advantage of the expected increase, by several orders of magnitude, in the number of known lenses over the coming decade.
The importance of alternative methods for measuring the Hubble constant, such as time-delay cosmography, is highlighted by the recent Hubble tension. It is paramount to thoroughly investigate and rule out systematic biases in all measurement methods before we can accept new physics as the source of this tension. In this study, we perform a check for systematic biases in the lens modelling procedure of time-delay cosmography by comparing independent and blind time-delay predictions of the system WGD 2038−4008 from two teams using two different software programs:
GLEE and LENSTRONOMY. The predicted time delays from the two teams incorporate the stellar kinematics of the deflector and the external convergence from line-of-sight structures. The unblinded time-delay predictions from the two teams agree within 1.2σ, implying that once the time delay is measured, the inferred Hubble constant will also be mutually consistent. However, there is a ∼4σ discrepancy between the power-law model slope and external shear, which is a significant discrepancy at the level of lens models before the stellar kinematics and the external convergence are incorporated. We identify the difference in the reconstructed point spread function (PSF) to be the source of this discrepancy. When the same reconstructed PSF was used by both teams, we achieved excellent agreement, within ∼0.6σ, indicating that potential systematics stemming from source reconstruction algorithms and investigator choices are well under control. We recommend that future studies supersample the PSF as needed and marginalize over multiple algorithms or realizations for the PSF reconstruction to mitigate the systematics associated with the PSF. A future study will measure the time delays of the system WGD 2038−4008 and infer the Hubble constant based on our mass models.
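The supersampling recommendation above can be sketched as follows. This is an illustrative Python fragment, not the GLEE or LENSTRONOMY implementation: the Gaussian PSF stand-in, the grid sizes, and the supersampling factor are placeholder assumptions.

```python
import numpy as np

def gaussian_psf(n_pix, fwhm_pix):
    """Circular Gaussian on an n_pix x n_pix grid, normalized to unit sum.
    A stand-in for a reconstructed PSF (real PSFs are measured from data)."""
    sigma = fwhm_pix / 2.355
    y, x = np.mgrid[:n_pix, :n_pix] - (n_pix - 1) / 2.0
    psf = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return psf / psf.sum()

def downsample(psf_fine, factor):
    """Bin a supersampled PSF down to the detector pixel scale by
    averaging factor x factor blocks, then renormalize."""
    n = psf_fine.shape[0] // factor
    binned = psf_fine.reshape(n, factor, n, factor).mean(axis=(1, 3))
    return binned / binned.sum()

# Model the PSF on a 4x supersampled grid, then bin to the data grid,
# so structure sharper than a detector pixel is not lost in the model.
factor = 4
psf_fine = gaussian_psf(25 * factor, fwhm_pix=2.0 * factor)
psf_data = downsample(psf_fine, factor)
```

In a full analysis one would, per the recommendation, repeat the fit with several such PSF realizations and marginalize over them.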
Time-delay cosmography uses the arrival time delays between images in strong gravitational lenses to measure cosmological parameters, in particular the Hubble constant H0. The lens models used in time-delay cosmography omit dark matter subhalos and line-of-sight halos because their effects are assumed to be negligible. We explicitly quantify this assumption by analyzing mock lens systems that include full populations of dark matter subhalos and line-of-sight halos, applying the same modeling assumptions used in the literature to infer H0. We base the mock lenses on six quadruply imaged quasars that have delivered measurements of the Hubble constant, and quantify the additional uncertainties and/or bias on a lens-by-lens basis. We show that omitting dark substructure does not bias inferences of H0. However, perturbations from substructure contribute an additional source of random uncertainty in the inferred value of H0 that scales as the square root of the lensing volume divided by the longest time delay. This additional source of uncertainty, for which we provide a fitting function, ranges from 0.7–2.4%. It may need to be incorporated in the error budget as the precision of cosmographic inferences from single lenses improves, and it sets a precision limit on inferences from single lenses.
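Because the substructure term quoted above (0.7–2.4%) is an independent random scatter, it would enter a single-lens H0 error budget in quadrature. A minimal sketch; the 5% baseline budget below is a hypothetical number for illustration, not a value from the paper.

```python
import math

def combine_in_quadrature(sigma_model_pct, sigma_substructure_pct):
    """Total percentage uncertainty on H0 when an independent random
    substructure term is added in quadrature to the model error budget."""
    return math.hypot(sigma_model_pct, sigma_substructure_pct)

# Hypothetical 5% single-lens budget plus the quoted 0.7%-2.4% range.
low = combine_in_quadrature(5.0, 0.7)   # barely changes the budget
high = combine_in_quadrature(5.0, 2.4)  # noticeable once budgets shrink
```

The quadrature sum shows why the term matters more as single-lens precision improves: against a 5% budget it is nearly negligible, but against a 2% budget the 2.4% scatter would dominate.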

ABSTRACT The magnifications of compact-source lenses are extremely sensitive to the presence of low-mass dark matter haloes along the entire sightline from the source to the observer. Traditionally, the study of dark matter structure in compact-source strong gravitational lenses has been limited to radio-loud systems, as the radio emission is extended and thus unaffected by microlensing, which can mimic the signal of dark matter structure. An alternative approach is to measure quasar nuclear narrow-line emission, which is free from microlensing and present in virtually all quasar lenses. In this paper, we double the number of systems that can be used for gravitational lensing analyses by presenting measurements of narrow-line emission from a sample of eight quadruply imaged quasar lens systems: WGD J0405−3308, HS 0810+2554, RX J0911+0551, SDSS J1330+1810, PS J1606−2333, WFI 2026−4536, WFI 2033−4723, and WGD J2038−4008. We describe our updated grism spectral modelling pipeline, which we use to measure the narrow-line fluxes presented here with uncertainties of 2–10 per cent. We fit the lensed image positions with smooth mass models and demonstrate that these models fail to produce the observed distribution of image fluxes over the entire sample of lenses. Furthermore, typical deviations are larger than those expected from macromodel uncertainties. This discrepancy indicates the presence of perturbations caused by small-scale dark matter structure. The interpretation of this result in terms of dark matter models is presented in a companion paper.

ABSTRACT Gravitational time delays provide a powerful one-step measurement of H0, independent of all other probes. One key ingredient in time-delay cosmography is high-accuracy lens models. These are currently expensive to obtain, both in terms of computing and investigator time (10^5–10^6 CPU hours and ∼0.5–1 yr, respectively). Major improvements in modelling speed are therefore necessary to exploit the large number of lenses that are forecast to be discovered over the current decade. In order to bypass this roadblock, we develop an automated modelling pipeline and apply it to a sample of 31 lens systems observed by the Hubble Space Telescope in multiple bands. Our automated pipeline can derive models for 30/31 lenses, with a few hours of human time and <100 CPU hours of computing time for a typical system. For each lens, we provide measurements of key parameters and predictions of magnification as well as time delays for the multiple images. We characterize the cosmography-readiness of our models using the stability of differences in the Fermat potential (proportional to time delay) with respect to modelling choices. We find that for 10/30 lenses, our models are cosmography or nearly cosmography grade (<3 per cent and 3–5 per cent variations). For 6/30 lenses, the models are close to cosmography grade (5–10 per cent). These results utilize informative priors and will need to be confirmed by further analysis. However, they are also likely to improve by extending the pipeline modelling sequence and options. In conclusion, we show that uniform cosmography-grade modelling of large strong lens samples is within reach.
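The proportionality used above is the standard time-delay relation Δt = (D_Δt/c) Δφ, with Fermat potential φ(θ) = (θ − β)²/2 − ψ(θ). A minimal sketch for a singular isothermal sphere (SIS) on the lens axis; the Einstein radius, source offset, and time-delay distance below are illustrative values, not measurements from this sample.

```python
import math

C = 299792458.0                  # speed of light [m/s]
MPC = 3.0856775814913673e22      # metres per megaparsec
ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def fermat_potential(theta, beta, theta_e):
    """Fermat potential (arcsec^2) on the lens axis for an SIS:
    phi = (theta - beta)^2 / 2 - theta_E * |theta|."""
    return 0.5 * (theta - beta) ** 2 - theta_e * abs(theta)

def time_delay_days(d_dt_mpc, dphi_arcsec2):
    """Delta t = (D_dt / c) * Delta phi, converting arcsec^2 to rad^2."""
    seconds = d_dt_mpc * MPC / C * dphi_arcsec2 * ARCSEC**2
    return seconds / 86400.0

# Illustrative numbers: theta_E = 1", source offset beta = 0.1",
# time-delay distance D_dt = 3000 Mpc.
theta_e, beta = 1.0, 0.1
img_a, img_b = beta + theta_e, beta - theta_e  # the two SIS images
dphi = (fermat_potential(img_b, beta, theta_e)
        - fermat_potential(img_a, beta, theta_e))  # = 2 * theta_E * beta
dt = time_delay_days(3000.0, dphi)                 # roughly two weeks
```

This also shows why Fermat-potential differences are a natural stability metric: a fractional change in Δφ maps one-to-one onto a fractional change in the predicted delay, and hence onto H0.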

Abstract We perform a search for galaxy–galaxy strong lens systems using a convolutional neural network (CNN) applied to imaging data from the first public data release of the DECam Local Volume Exploration Survey, which contains ∼520 million astronomical sources covering ∼4000 deg^2 of the southern sky to a 5σ point-source depth of g = 24.3, r = 23.9, i = 23.3, and z = 22.8 mag. Following the methodology of similar searches using Dark Energy Camera data, we apply color and magnitude cuts to select a catalog of ∼11 million extended astronomical sources. After scoring with our CNN, the highest-scoring 50,000 images were visually inspected and assigned a score on a scale from 0 (not a lens) to 3 (very probable lens). We present a list of 581 strong lens candidates, 562 of which are previously unreported. We categorize our candidates using their human-assigned scores, resulting in 55 Grade A candidates, 149 Grade B candidates, and 377 Grade C candidates. We additionally highlight eight potential quadruply lensed quasars from this sample. Due to the location of our search footprint in the northern Galactic cap (b > 10 deg) and southern celestial hemisphere (decl. < 0 deg), our candidate list has little overlap with other existing ground-based searches. Where our search footprint does overlap with other searches, we find a significant number of high-quality candidates that were previously unidentified, indicating a degree of orthogonality in our methodology. We report properties of our candidates, including apparent magnitude and Einstein radius estimated from the image separation.
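The Einstein-radius estimate mentioned above is commonly approximated as half the maximum image separation, a rule of thumb for roughly symmetric systems. A sketch with hypothetical image positions; the paper's actual estimator may differ in detail.

```python
import math
from itertools import combinations

def einstein_radius_arcsec(image_positions):
    """Rough Einstein-radius estimate: half the maximum pairwise image
    separation. Positions are (RA offset, Dec offset) in arcsec."""
    sep = max(math.dist(a, b) for a, b in combinations(image_positions, 2))
    return sep / 2.0

# Hypothetical quad with images lying roughly on a 1" Einstein ring.
positions = [(1.0, 0.0), (-1.0, 0.1), (0.0, 1.05), (0.1, -0.95)]
theta_e = einstein_radius_arcsec(positions)  # close to 1 arcsec
```

For a fold or cusp configuration the images cluster on one side of the ring, so this estimate is only indicative; a mass model is needed for a precise value.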
ABSTRACT In recent years, breakthroughs in methods and data have enabled gravitational time delays to emerge as a very powerful tool to measure the Hubble constant H0. However, published state-of-the-art analyses require of order 1 yr of expert investigator time and up to a million hours of computing time per system. Furthermore, as precision improves, it is crucial to identify and mitigate systematic uncertainties. With this time-delay lens modelling challenge, we aim to assess the level of precision and accuracy of the modelling techniques that are currently fast enough to handle of order 50 lenses, via the blind analysis of simulated data sets. The results in Rungs 1 and 2 show that methods that use only the point source positions tend to have lower precision (10–20 per cent) while remaining accurate. In Rung 2, the methods that exploit the full information of the imaging and kinematic data sets can recover H0 within the target accuracy (A < 2 per cent) and precision (<6 per cent per system), even in the presence of a poorly known point spread function and complex source morphology. A post-unblinding analysis of Rung 3 showed the numerical precision of the ray-traced cosmological simulations to be insufficient to test lens modelling methodology at the per cent level, making the results difficult to interpret. A new challenge with improved simulations is needed to make further progress in the investigation of systematic uncertainties. For completeness, we present the Rung 3 results in an appendix and use them to discuss various approaches to mitigating against similar subtle data generation effects in future blind challenges.

The H0LiCOW collaboration inferred, via strong gravitational lensing time delays, a Hubble constant value of H0 = 73.3 +1.7/−1.8 km s−1 Mpc−1, describing deflector mass density profiles by either a power law or stars (constant mass-to-light ratio) plus standard dark matter halos. The mass-sheet transform (MST), which leaves the lensing observables unchanged, is considered the dominant source of residual uncertainty in H0. We quantify any potential effect of the MST with a flexible family of mass models that directly encodes it and is hence maximally degenerate with H0. Our calculation is based on a new hierarchical Bayesian approach in which the MST is only constrained by stellar kinematics. The approach is validated on mock lenses, which are generated from hydrodynamic simulations. We first applied the inference to the TDCOSMO sample of seven lenses, six of which are from H0LiCOW, and measured H0 = 74.5 +5.6/−6.1 km s−1 Mpc−1. Secondly, in order to further constrain the deflector mass density profiles, we added imaging and spectroscopy for a set of 33 strong gravitational lenses from the Sloan Lens ACS (SLACS) sample. For nine of the 33 SLACS lenses, we used resolved kinematics to constrain the stellar anisotropy. From the joint hierarchical analysis of the TDCOSMO+SLACS sample, we measured H0 = 67.4 +4.1/−3.2 km s−1 Mpc−1. This measurement assumes that the TDCOSMO and SLACS galaxies are drawn from the same parent population. The blind H0LiCOW, TDCOSMO-only, and TDCOSMO+SLACS analyses are in mutual statistical agreement. The TDCOSMO+SLACS analysis prefers marginally shallower mass profiles than H0LiCOW or TDCOSMO-only. Without relying on the form of the mass density profile used by H0LiCOW, we achieve a ∼5% measurement of H0.
While our new hierarchical analysis does not statistically invalidate the mass profile assumptions by H0LiCOW – and thus the H0 measurement relying on them – it demonstrates the importance of understanding the mass density profile of elliptical galaxies. The uncertainties on H0 derived in this paper can be reduced by physical or observational priors on the form of the mass profile, or by additional data.
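The mass-sheet transform discussed above acts on the convergence as κ_λ = λκ + (1 − λ). It leaves image positions and flux ratios unchanged but rescales all predicted time delays by λ, so for fixed measured delays the inferred H0 rescales by the same factor. A minimal numerical sketch; λ = 0.9 and H0 = 74 are illustrative values, not results from the paper.

```python
def mst_convergence(kappa, lam):
    """Mass-sheet transform of a convergence value:
    kappa_lam = lam * kappa + (1 - lam).
    Imaging observables (positions, flux ratios) are invariant."""
    return lam * kappa + (1.0 - lam)

def mst_h0(h0, lam):
    """Predicted time delays scale as lam under the MST, so for a fixed
    measured delay the inferred H0 scales by lam as well."""
    return lam * h0

# Illustrative: a lam = 0.9 transform (e.g., an unmodeled internal mass
# sheet) rescales an inferred H0 of 74 down to about 66.6 km/s/Mpc.
h0_rescaled = mst_h0(74.0, 0.9)
```

Note that κ = 1 is a fixed point of the transform, and only data outside the imaging (here, stellar kinematics) can pin down λ, which is why the hierarchical analysis lets kinematics alone constrain it.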

Abstract Gravitationally lensed supernovae (LSNe) are important probes of cosmic expansion, but they remain rare and difficult to find. Current cosmic surveys likely contain 5–10 LSNe in total, while next-generation experiments are expected to contain several hundred to a few thousand of these systems. We search for these systems in the observed Dark Energy Survey (DES) five-year SN fields: ten 3 sq. deg. regions of sky imaged in the griz bands approximately every six nights over five years. To perform the search, we utilize the DeepZipper approach: a multi-branch deep learning architecture trained on image-level simulations of LSNe that simultaneously learns spatial and temporal relationships from time series of images. We find that our method obtains an LSN recall of 61.13% and a false-positive rate of 0.02% on the DES SN field data. DeepZipper selected 2245 candidates from a magnitude-limited (m_i < 22.5) catalog of 3,459,186 systems. We employ human visual inspection to review systems selected by the network and find three candidate LSNe in the DES SN fields.
Time-delay cosmography of lensed quasars has achieved 2.4% precision on the measurement of the Hubble constant, H0. As part of an ongoing effort to uncover and control systematic uncertainties, we investigate three potential sources: (1) stellar kinematics, (2) line-of-sight effects, and (3) the deflector mass model. To meet this goal in a quantitative way, we reproduced the H0LiCOW/SHARP/STRIDES (hereafter TDCOSMO) procedures on a set of real and simulated data, and we find the following. First, stellar kinematics cannot be a dominant source of error or bias, since we find that a systematic change of 10% in the measured velocity dispersion leads to only a 0.7% shift in H0 from the seven lenses analyzed by TDCOSMO. Second, we find no bias to arise from incorrect estimation of the line-of-sight effects. Third, we show that elliptical composite (stars + dark matter halo), power-law, and cored power-law mass profiles have the flexibility to yield a broad range of H0 values. However, the TDCOSMO procedures that model the data with both composite and power-law mass profiles are informative. If the models agree, as we observe in real systems owing to the "bulge–halo" conspiracy, H0 is recovered precisely and accurately by both models. If the two models disagree, as in the case of some pathological models illustrated here, the TDCOSMO procedure either discriminates between them through the goodness of fit, or it accounts for the discrepancy in the final error bars provided by the analysis. This conclusion is consistent with a reanalysis of six of the TDCOSMO (real) lenses: the composite model yields H0 = 74.0 +1.7/−1.8 km s−1 Mpc−1, while the power-law model yields 74.2 +1.6/−1.6 km s−1 Mpc−1. In conclusion, we find no evidence of bias or errors larger than the current statistical uncertainties reported by TDCOSMO.