

Search for: All records

Creators/Authors contains: "Sluse, D."


  1. When strong gravitational lenses are to be used as an astrophysical or cosmological probe, models of their mass distributions are often needed. We present a new, time-efficient automation code for the uniform modeling of strongly lensed quasars with GLEE, a lens-modeling software for multiband data. By using the observed positions of the lensed quasars and the spatially extended surface brightness distribution of the host galaxy of the lensed quasar, we obtain a model of the mass distribution of the lens galaxy. We applied this uniform modeling pipeline to a sample of nine strongly lensed quasars for which images were obtained with the Wide Field Camera 3 of the Hubble Space Telescope. The models show well-reconstructed light components and a good alignment between mass and light centroids in most cases. We find that the automated modeling code significantly reduces the user's input time during the modeling process. The time for preparing the required input files is reduced by a factor of 3, from ~3 h to about one hour, and the user's active input time during modeling is reduced by a factor of 10, from ~10 h to about one hour per lens system. This automated uniform modeling pipeline can efficiently produce uniform models of extensive lens-system samples that can be used for further cosmological analysis. A blind test that compared our results with those of an independent automated modeling pipeline based on the modeling software Lenstronomy revealed important lessons. Quantities such as the Einstein radius, astrometry, mass flattening, and position angle are generally robustly determined. Other quantities, such as the radial slope of the mass density profile and the predicted time delays, depend crucially on the quality of the data and on the accuracy with which the point spread function is reconstructed. Better data and/or a more detailed analysis are necessary to elevate our automated models to cosmography grade. Nevertheless, our pipeline enables the quick selection of lenses for follow-up and further modeling, which significantly speeds up the construction of cosmography-grade models. This important step forward will help us to take advantage of the expected increase of several orders of magnitude in the number of known lenses over the coming decade.
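
    For context, the time delays discussed above follow the standard single-plane lensing relation: the delay between two images is proportional to the difference of their Fermat potentials, which is why the radial mass slope (which shapes the lens potential $\psi$) propagates directly into the predicted delays:

    $$
    \Delta t_{ij} = \frac{1+z_{\rm d}}{c}\,\frac{D_{\rm d}\,D_{\rm s}}{D_{\rm ds}}\,\big[\phi(\boldsymbol{\theta}_i)-\phi(\boldsymbol{\theta}_j)\big],
    \qquad
    \phi(\boldsymbol{\theta}) = \frac{(\boldsymbol{\theta}-\boldsymbol{\beta})^2}{2}-\psi(\boldsymbol{\theta}),
    $$

    where $z_{\rm d}$ is the deflector redshift, $\boldsymbol{\beta}$ the unlensed source position, and $D_{\rm d}$, $D_{\rm s}$, $D_{\rm ds}$ the angular diameter distances to the deflector, to the source, and between them.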

     
  2. The importance of alternative methods for measuring the Hubble constant, such as time-delay cosmography, is highlighted by the recent Hubble tension. It is paramount to thoroughly investigate and rule out systematic biases in all measurement methods before we can accept new physics as the source of this tension. In this study, we perform a check for systematic biases in the lens modelling procedure of time-delay cosmography by comparing independent and blind time-delay predictions of the system WGD 2038−4008 from two teams using two different software programs: GLEE and LENSTRONOMY. The predicted time delays from the two teams incorporate the stellar kinematics of the deflector and the external convergence from line-of-sight structures. The unblinded time-delay predictions from the two teams agree within 1.2σ, implying that once the time delay is measured, the inferred Hubble constant will also be mutually consistent. However, there is a ∼4σ discrepancy between the power-law model slope and external shear, which is a significant discrepancy at the level of lens models before the stellar kinematics and the external convergence are incorporated. We identify the difference in the reconstructed point spread function (PSF) to be the source of this discrepancy. When the same reconstructed PSF was used by both teams, we achieved excellent agreement, within ∼0.6σ, indicating that potential systematics stemming from source reconstruction algorithms and investigator choices are well under control. We recommend that future studies supersample the PSF as needed and marginalize over multiple algorithms or realizations for the PSF reconstruction to mitigate the systematics associated with the PSF. A future study will measure the time delays of the system WGD 2038−4008 and infer the Hubble constant based on our mass models.
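
    As a minimal illustration of the PSF supersampling recommended above (a generic sketch, not the GLEE or LENSTRONOMY implementation; the supersampling factor and spline order are arbitrary choices):

```python
import numpy as np
from scipy.ndimage import zoom

def supersample_psf(psf, factor=3, order=3):
    """Supersample a pixelized PSF onto a grid `factor` times finer
    using spline interpolation, then re-normalize to unit flux.

    Illustration only: the pipelines discussed above reconstruct the
    PSF iteratively from the lensed quasar images themselves.
    """
    fine = zoom(psf, factor, order=order)
    fine = np.clip(fine, 0.0, None)  # spline interpolation can overshoot below zero
    return fine / fine.sum()

# usage: psf_fine = supersample_psf(psf_stamp, factor=3)
```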

     
  3. Abstract We investigate the environment and line of sight of the H0LiCOW lens B1608+656 using Subaru Suprime-Cam and the Hubble Space Telescope (HST) to perform a weak lensing analysis. We compare three different methods to reconstruct the mass map of the field: the standard Kaiser-Squires inversion coupled with inpainting and Gaussian or wavelet filtering, and ${\tt Glimpse}$, a method based on sparse regularization of the shear field. We find no substantial difference between the 2D mass reconstructions, but we find that the ground-based data are less sensitive to small-scale structures than the space-based observations. Marginalising over the results obtained with all the reconstruction techniques applied to the two available HST filters, F606W and F814W, we estimate the external convergence at the position of B1608+656 to be $\kappa _{\mathrm{ext}}= 0.11^{+0.06}_{-0.04}$, where the error bars correspond to the 16th and 84th percentiles. Although obtained with a completely different technique, this result is compatible with previous estimates using the number-counts approach, supporting the conclusion that B1608+656 resides in an over-dense line of sight. Using our mass reconstructions, we also compare the convergence at the position of several groups of galaxies in the field of B1608+656 with mass measurements based on various analytical mass profiles, and find that the weak lensing results favor truncated halo models.
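
    For reference, the core of the Kaiser-Squires inversion named above is a few lines of flat-sky Fourier algebra (a minimal, noise-free sketch on a regular shear grid, omitting the masking, inpainting, and filtering steps the paper describes):

```python
import numpy as np

def kaiser_squires(gamma1, gamma2):
    """Invert gridded shear maps (gamma1, gamma2) to a convergence map kappa.

    Minimal flat-sky Kaiser-Squires inversion; real data additionally
    require masks/inpainting and Gaussian or wavelet filtering.
    """
    ny, nx = gamma1.shape
    k1 = np.fft.fftfreq(nx)[np.newaxis, :]
    k2 = np.fft.fftfreq(ny)[:, np.newaxis]
    k_sq = k1**2 + k2**2
    k_sq[0, 0] = 1.0  # avoid division by zero; the mean of kappa is unconstrained

    g1_hat = np.fft.fft2(gamma1)
    g2_hat = np.fft.fft2(gamma2)
    kappa_hat = ((k1**2 - k2**2) * g1_hat + 2.0 * k1 * k2 * g2_hat) / k_sq
    return np.real(np.fft.ifft2(kappa_hat))
```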
  4. ABSTRACT The magnifications of compact-source lenses are extremely sensitive to the presence of low-mass dark matter haloes along the entire sightline from the source to the observer. Traditionally, the study of dark matter structure in compact-source strong gravitational lenses has been limited to radio-loud systems, as the radio emission is extended and thus unaffected by the microlensing that can mimic the signal of dark matter structure. An alternative approach is to measure quasar nuclear narrow-line emission, which is free from microlensing and present in virtually all quasar lenses. In this paper, we double the number of systems that can be used for gravitational lensing analyses by presenting measurements of narrow-line emission from a sample of eight quadruply imaged quasar lens systems: WGD J0405−3308, HS 0810+2554, RX J0911+0551, SDSS J1330+1810, PS J1606−2333, WFI 2026−4536, WFI 2033−4723, and WGD J2038−4008. We describe our updated grism spectral modelling pipeline, which we use to measure the narrow-line fluxes presented here with uncertainties of 2–10 per cent. We fit the lensed image positions with smooth mass models and demonstrate that these models fail to produce the observed distribution of image fluxes over the entire sample of lenses. Furthermore, typical deviations are larger than those expected from macromodel uncertainties. This discrepancy indicates the presence of perturbations caused by small-scale dark matter structure. The interpretation of this result in terms of dark matter models is presented in a companion paper.
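
    The image fluxes at issue are set by the point-source magnification of the macromodel,

    $$
    \mu = \frac{1}{(1-\kappa)^2 - \gamma^2},
    $$

    where $\kappa$ and $\gamma$ are the local convergence and shear at the image position; a low-mass dark matter halo near an image perturbs $\kappa$ and $\gamma$, changing the observed flux ratios while leaving the image positions nearly intact.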
  5. ABSTRACT

    Gravitational time delays provide a powerful one-step measurement of $H_0$, independent of all other probes. One key ingredient in time-delay cosmography is a set of high-accuracy lens models. These are currently expensive to obtain, both in terms of computing and investigator time ($10^5$–$10^6$ CPU hours and ∼0.5–1 yr, respectively). Major improvements in modelling speed are therefore necessary to exploit the large number of lenses that are forecast to be discovered over the current decade. In order to bypass this roadblock, we develop an automated modelling pipeline and apply it to a sample of 31 lens systems observed by the Hubble Space Telescope in multiple bands. Our automated pipeline can derive models for 30/31 lenses with a few hours of human time and <100 CPU hours of computing time for a typical system. For each lens, we provide measurements of key parameters and predictions of magnification as well as time delays for the multiple images. We characterize the cosmography-readiness of our models using the stability of differences in the Fermat potential (proportional to time delay) with respect to modelling choices. We find that for 10/30 lenses, our models are cosmography grade or nearly cosmography grade (<3 per cent and 3–5 per cent variations, respectively). For 6/30 lenses, the models are close to cosmography grade (5–10 per cent). These results utilize informative priors and will need to be confirmed by further analysis. However, they are also likely to improve by extending the pipeline modelling sequence and options. In conclusion, we show that uniform cosmography-grade modelling of large strong lens samples is within reach.
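
    A sketch of the cosmography-readiness grading described above (the function and its bookkeeping are hypothetical; only the percentage bands are taken from the text):

```python
import numpy as np

def grade_lens(fermat_diffs):
    """Grade a lens by the stability of its Fermat-potential differences.

    `fermat_diffs` has shape (n_model_choices, n_image_pairs): the Fermat
    potential difference of each image pair under each modelling choice.
    The thresholds mirror the <3, 3-5, and 5-10 per cent bands quoted
    above; the function itself is a hypothetical illustration.
    """
    spread = np.ptp(fermat_diffs, axis=0) / np.abs(np.median(fermat_diffs, axis=0))
    worst = 100.0 * spread.max()  # worst image pair, in per cent
    if worst < 3.0:
        return "cosmography grade"
    if worst < 5.0:
        return "nearly cosmography grade"
    if worst < 10.0:
        return "close to cosmography grade"
    return "needs further modelling"
```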

     
  6. Abstract

    We perform a search for galaxy–galaxy strong lens systems using a convolutional neural network (CNN) applied to imaging data from the first public data release of the DECam Local Volume Exploration Survey, which contains ∼520 million astronomical sources covering ∼4000 deg² of the southern sky to a 5σ point-source depth of g = 24.3, r = 23.9, i = 23.3, and z = 22.8 mag. Following the methodology of similar searches using Dark Energy Camera data, we apply color and magnitude cuts to select a catalog of ∼11 million extended astronomical sources. After scoring with our CNN, the highest-scoring 50,000 images were visually inspected and assigned a score on a scale from 0 (not a lens) to 3 (very probable lens). We present a list of 581 strong lens candidates, 562 of which are previously unreported. We categorize our candidates using their human-assigned scores, resulting in 55 Grade A candidates, 149 Grade B candidates, and 377 Grade C candidates. We additionally highlight eight potential quadruply lensed quasars from this sample. Due to the location of our search footprint in the northern Galactic cap (b > 10 deg) and southern celestial hemisphere (decl. < 0 deg), our candidate list has little overlap with other existing ground-based searches. Where our search footprint does overlap with other searches, we find a significant number of high-quality candidates that were previously unidentified, indicating a degree of orthogonality in our methodology. We report properties of our candidates including apparent magnitude and Einstein radius estimated from the image separation.
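
    A schematic of the catalog-level preselection described above (both the catalog schema and all numerical cut values here are placeholders, not the ones used in the search, which follows earlier Dark Energy Camera search conventions cited in the paper):

```python
import numpy as np

def preselect(catalog):
    """Apply illustrative color/magnitude cuts to an extended-source catalog.

    `catalog` is assumed to be a structured array with per-band magnitudes
    'g', 'r', 'i' and a boolean 'extended' flag; schema and cut values are
    placeholders for the actual selection.
    """
    g, r, i = catalog["g"], catalog["r"], catalog["i"]
    mask = (
        catalog["extended"]
        & (g - r > 0.5) & (g - r < 3.0)  # red early-type deflector colors (placeholder)
        & (r - i > 0.0) & (r - i < 2.0)  # placeholder
        & (r < 22.5)                     # bright enough to grade visually (placeholder)
    )
    return catalog[mask]
```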

     
  7.
    ABSTRACT In recent years, breakthroughs in methods and data have enabled gravitational time delays to emerge as a very powerful tool to measure the Hubble constant $H_0$. However, published state-of-the-art analyses require of order 1 yr of expert investigator time and up to a million hours of computing time per system. Furthermore, as precision improves, it is crucial to identify and mitigate systematic uncertainties. With this time-delay lens modelling challenge, we aim to assess the level of precision and accuracy of the modelling techniques that are currently fast enough to handle of order 50 lenses, via the blind analysis of simulated data sets. The results in Rungs 1 and 2 show that methods that use only the point-source positions tend to have lower precision (10–20 per cent) while remaining accurate. In Rung 2, the methods that exploit the full information of the imaging and kinematic data sets can recover $H_0$ within the target accuracy (|A| < 2 per cent) and precision (<6 per cent per system), even in the presence of a poorly known point spread function and complex source morphology. A post-unblinding analysis of Rung 3 showed the numerical precision of the ray-traced cosmological simulations to be insufficient to test lens modelling methodology at the per cent level, making the results difficult to interpret. A new challenge with improved simulations is needed to make further progress in the investigation of systematic uncertainties. For completeness, we present the Rung 3 results in an appendix and use them to discuss various approaches to mitigating similar subtle data-generation effects in future blind challenges.
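
    The headline metrics can be summarized as follows (a sketch of the standard definitions: accuracy A as the mean fractional offset from the truth and precision as the mean fractional reported uncertainty; the challenge also scores a goodness-of-fit statistic not reproduced here):

```python
import numpy as np

def challenge_metrics(h0_est, h0_err, h0_true):
    """Accuracy and precision of a set of blind H0 submissions, in per cent.

    h0_est, h0_err: arrays of submitted H0 values and their 1-sigma errors.
    Targets quoted above: |accuracy| < 2 per cent, precision < 6 per cent
    per system.
    """
    accuracy = 100.0 * np.mean((h0_est - h0_true) / h0_true)
    precision = 100.0 * np.mean(h0_err / h0_true)
    return accuracy, precision
```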
  8.
    We present six new time-delay measurements obtained from $R_{\rm c}$-band monitoring data acquired at the Max Planck Institute for Astrophysics (MPIA) 2.2 m telescope at La Silla observatory between October 2016 and February 2020. The lensed quasars HE 0047−1756, WG 0214−2105, DES 0407−5006, 2M 1134−2103, PSJ 1606−2333, and DES 2325−5229 were observed almost daily at high signal-to-noise ratio to obtain high-quality light curves in which we can record fast and small-amplitude variations of the quasars. We measured time delays between all pairs of multiple images with only one or two seasons of monitoring, with the exception of the time delays relative to image D of PSJ 1606−2333. The most precise estimate was obtained for the delay between image A and image B of DES 0407−5006, where $\tau_{\rm AB} = -128.4^{+3.5}_{-3.8}$ d (2.8% precision), including systematics due to extrinsic variability in the light curves. For HE 0047−1756, we combined our high-cadence data with measurements from decade-long light curves from previous COSMOGRAIL campaigns, and reach a precision of 0.9 d on the final measurement. The present work demonstrates the feasibility of measuring time delays in lensed quasars in only one or two seasons, provided high signal-to-noise ratio data are obtained at a cadence close to daily.
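
    A toy version of such a delay measurement (a simple grid search that shifts one light curve and minimizes the scatter of the difference curve; the COSMOGRAIL curve-shifting estimators used in practice additionally model the extrinsic microlensing variability that dominates the quoted systematics):

```python
import numpy as np

def estimate_delay(t_a, mag_a, t_b, mag_b, trial_delays):
    """Toy time-delay estimator for two light curves of the same quasar.

    For each trial delay, image B is shifted, interpolated onto the epochs
    of image A, and the rms of the difference curve is computed; np.std
    subtracts the mean, absorbing a constant magnitude offset. Ignores
    microlensing, error weighting, and season gaps.
    """
    rms = [np.std(mag_a - np.interp(t_a, t_b + tau, mag_b))
           for tau in trial_delays]
    return trial_delays[int(np.argmin(rms))]

# usage: tau_ab = estimate_delay(t_a, mag_a, t_b, mag_b, np.arange(-200.0, 200.0, 0.5))
```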
  9.
    The H0LiCOW collaboration inferred, via strong gravitational lensing time delays, a Hubble constant value of $H_0 = 73.3^{+1.7}_{-1.8}$ km s$^{-1}$ Mpc$^{-1}$, describing deflector mass density profiles by either a power law or stars (constant mass-to-light ratio) plus standard dark matter halos. The mass-sheet transform (MST), which leaves the lensing observables unchanged, is considered the dominant source of residual uncertainty in $H_0$. We quantify any potential effect of the MST with a flexible family of mass models that directly encodes it and is hence maximally degenerate with $H_0$. Our calculation is based on a new hierarchical Bayesian approach in which the MST is only constrained by stellar kinematics. The approach is validated on mock lenses, which are generated from hydrodynamic simulations. We first applied the inference to the TDCOSMO sample of seven lenses, six of which are from H0LiCOW, and measured $H_0 = 74.5^{+5.6}_{-6.1}$ km s$^{-1}$ Mpc$^{-1}$. Secondly, in order to further constrain the deflector mass density profiles, we added imaging and spectroscopy for a set of 33 strong gravitational lenses from the Sloan Lens ACS (SLACS) sample. For nine of the 33 SLACS lenses, we used resolved kinematics to constrain the stellar anisotropy. From the joint hierarchical analysis of the TDCOSMO+SLACS sample, we measured $H_0 = 67.4^{+4.1}_{-3.2}$ km s$^{-1}$ Mpc$^{-1}$. This measurement assumes that the TDCOSMO and SLACS galaxies are drawn from the same parent population. The blind H0LiCOW, TDCOSMO-only, and TDCOSMO+SLACS analyses are in mutual statistical agreement. The TDCOSMO+SLACS analysis prefers marginally shallower mass profiles than H0LiCOW or TDCOSMO-only. Without relying on the form of the mass density profile used by H0LiCOW, we achieve a ∼5% measurement of $H_0$. While our new hierarchical analysis does not statistically invalidate the mass profile assumptions made by H0LiCOW – and thus the $H_0$ measurement relying on them – it demonstrates the importance of understanding the mass density profiles of elliptical galaxies. The uncertainties on $H_0$ derived in this paper can be reduced by physical or observational priors on the form of the mass profile, or by additional data.
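
    The mass-sheet transform at the heart of this analysis rescales the convergence profile while leaving all imaging observables unchanged,

    $$
    \kappa_\lambda(\boldsymbol{\theta}) = \lambda\,\kappa(\boldsymbol{\theta}) + (1-\lambda),
    $$

    under which the model-predicted time delays scale as $\Delta t \rightarrow \lambda\,\Delta t$ and hence the inferred Hubble constant as $H_0 \rightarrow \lambda\,H_0$; this is why non-lensing information, here the stellar kinematics, is needed to constrain $\lambda$.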
  10. Time-delay cosmography of lensed quasars has achieved 2.4% precision on the measurement of the Hubble constant, $H_0$. As part of an ongoing effort to uncover and control systematic uncertainties, we investigate three potential sources: (1) stellar kinematics, (2) line-of-sight effects, and (3) the deflector mass model. To meet this goal in a quantitative way, we reproduced the H0LiCOW/SHARP/STRIDES (hereafter TDCOSMO) procedures on a set of real and simulated data, and we find the following. First, stellar kinematics cannot be a dominant source of error or bias, since we find that a systematic change of 10% in the measured velocity dispersion leads to only a 0.7% shift in $H_0$ from the seven lenses analyzed by TDCOSMO. Second, we find no bias to arise from incorrect estimation of the line-of-sight effects. Third, we show that elliptical composite (stars + dark matter halo), power-law, and cored power-law mass profiles have the flexibility to yield a broad range of $H_0$ values. However, the TDCOSMO procedures that model the data with both composite and power-law mass profiles are informative. If the models agree, as we observe in real systems owing to the “bulge-halo” conspiracy, $H_0$ is recovered precisely and accurately by both models. If the two models disagree, as in the case of some pathological models illustrated here, the TDCOSMO procedure either discriminates between them through the goodness of fit, or it accounts for the discrepancy in the final error bars provided by the analysis. This conclusion is consistent with a reanalysis of six of the TDCOSMO (real) lenses: the composite model yields $H_0 = 74.0^{+1.7}_{-1.8}$ km s$^{-1}$ Mpc$^{-1}$, while the power-law model yields $74.2^{+1.6}_{-1.6}$ km s$^{-1}$ Mpc$^{-1}$. In conclusion, we find no evidence of bias or errors larger than the current statistical uncertainties reported by TDCOSMO.
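
    To make the line-of-sight item concrete, the external convergence enters the inference as a simple multiplicative correction, so a misestimate propagates directly into $H_0$ (a standard relation, shown here with illustrative numbers):

```python
# Standard external-convergence correction in time-delay cosmography:
# the lens model alone overestimates the time-delay distance by a factor
# 1/(1 - kappa_ext), and H0 scales as 1/D_dt, so
#     H0_true = H0_model * (1 - kappa_ext).
h0_model = 74.0    # km/s/Mpc from a model ignoring the line of sight (illustrative)
kappa_ext = 0.02   # illustrative external convergence

h0_corrected = h0_model * (1.0 - kappa_ext)
print(f"line-of-sight corrected H0: {h0_corrected:.1f} km/s/Mpc")
# An error of 0.01 in kappa_ext therefore biases H0 by about 1 per cent.
```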