

Title: Galaxy blending effects in deep imaging cosmic shear probes of cosmology
ABSTRACT

Upcoming deep imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time will be confronted with challenges that come with increased depth. One of the leading systematic errors in deep surveys is the blending of objects due to higher surface density in the more crowded images; a considerable fraction of the galaxies which we hope to use for cosmology analyses will overlap each other on the observed sky. In order to investigate these challenges, we emulate blending in a mock catalogue consisting of galaxies at a depth equivalent to 1.3 yr of the full 10-yr Rubin Observatory that includes effects due to weak lensing, ground-based seeing, and the uncertainties due to extraction of catalogues from imaging data. The emulated catalogue indicates that approximately 12 per cent of the observed galaxies are ‘unrecognized’ blends that contain two or more objects but are detected as one. Using the positions and shears of half a billion distant galaxies, we compute shear–shear correlation functions after selecting tomographic samples in terms of both spectroscopic and photometric redshift bins. We examine the sensitivity of the cosmological parameter estimation to unrecognized blending employing both jackknife and analytical Gaussian covariance estimators. An ∼0.025 decrease in the derived structure growth parameter S8 = σ8(Ωm/0.3)^0.5 is seen due to unrecognized blending in both tomographies with a slight additional bias for the photo-z-based tomography. This bias is greater than the 2σ statistical error in measuring S8.
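
The structure-growth parameter quoted above is a simple function of σ8 and Ωm. A minimal sketch of the definition, using illustrative parameter values that are assumptions for this example and not taken from the paper:

```python
def s8(sigma8, omega_m, alpha=0.5):
    """Structure-growth parameter S8 = sigma8 * (Omega_m / 0.3)**alpha."""
    return sigma8 * (omega_m / 0.3) ** alpha

# Illustrative values (assumed for this sketch, not from the paper):
print(round(s8(0.81, 0.31), 4))  # prints 0.8234
```

At values of this order, the ∼0.025 blending-induced shift in S8 reported above corresponds to roughly a 3 per cent effect.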

 
NSF-PAR ID: 10368895
Publisher / Repository: Oxford University Press
Journal Name: Monthly Notices of the Royal Astronomical Society
Volume: 514
Issue: 4
ISSN: 0035-8711
Page Range: p. 5905-5926
Sponsoring Org: National Science Foundation
More Like this
  1. ABSTRACT

    Tidal features in the outskirts of galaxies yield unique information about their past interactions and are a key prediction of the hierarchical structure formation paradigm. The Vera C. Rubin Observatory is poised to deliver deep observations for potentially millions of objects with visible tidal features, but the inference of galaxy interaction histories from such features is not straightforward. Utilizing automated techniques and human visual classification in conjunction with realistic mock images produced using the NewHorizon cosmological simulation, we investigate the nature, frequency, and visibility of tidal features and debris across a range of environments and stellar masses. In our simulated sample, around 80 per cent of the flux in the tidal features around Milky Way or greater mass galaxies is detected at the 10-yr depth of the Legacy Survey of Space and Time (30–31 mag arcsec^−2), falling to 60 per cent assuming a shallower final depth of 29.5 mag arcsec^−2. The fraction of total flux found in tidal features increases towards higher masses, rising to 10 per cent for the most massive objects in our sample (M⋆ ∼ 10^11.5 M⊙). When observed at sufficient depth, such objects frequently exhibit many distinct tidal features with complex shapes. The interpretation and characterization of such features varies significantly with image depth and object orientation, introducing significant biases in their classification. Assuming the data reduction pipeline is properly optimized, we expect the Rubin Observatory to be capable of recovering much of the flux found in the outskirts of Milky Way mass galaxies, even at intermediate redshifts (z < 0.2).

     
  2.
    Weak lensing measurements suffer from well-known shear estimation biases, which can be partially corrected for with the use of image simulations. In this work we present an analysis of simulated images that mimic Hubble Space Telescope/Advanced Camera for Surveys observations of high-redshift galaxy clusters, including cluster-specific issues such as non-weak shear and increased blending. Our synthetic galaxies have been generated to have similar observed properties as the background-selected source samples studied in the real images. First, we used simulations with galaxies placed on a grid to determine a revised signal-to-noise-dependent (S/N_KSB) correction for multiplicative shear measurement bias, and to quantify the sensitivity of our KSB+ bias calibration to mismatches of galaxy or PSF properties between the real data and the simulations. Next, we studied the impact of increased blending and light contamination from cluster and foreground galaxies, finding it to be negligible for high-redshift (z > 0.7) clusters, whereas shear measurements can be affected at the ∼1% level for lower redshift clusters given their brighter member galaxies. Finally, we studied the impact of fainter neighbours and selection bias using a set of simulated images that mimic the positions and magnitudes of galaxies in Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) data, thereby including realistic clustering. While the initial SExtractor object detection causes a multiplicative shear selection bias of −0.028 ± 0.002, this is reduced to −0.016 ± 0.002 by further cuts applied in our pipeline. Given the limited depth of the CANDELS data, we compared our CANDELS-based estimate for the impact of faint neighbours on the multiplicative shear measurement bias to a grid-based analysis, to which we added clustered galaxies to even fainter magnitudes based on Hubble Ultra Deep Field data, yielding a refined estimate of ∼−0.013.
    Our sensitivity analysis suggests that our pipeline is calibrated to an accuracy of ∼0.015 once all corrections are applied, which is fully sufficient for current and near-future weak lensing studies of high-redshift clusters. As an application, we used it for a refined analysis of three highly relaxed clusters from the South Pole Telescope Sunyaev–Zeldovich survey, where we now included measurements down to the cluster core (r > 200 kpc), as enabled by our work. Compared to previously employed scales (r > 500 kpc), this tightens the cluster mass constraints by a factor of 1.38 on average.
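
The multiplicative and additive biases quoted above enter through the linear shear-bias model commonly used in the weak lensing literature, g_obs = (1 + m) g_true + c. A minimal sketch of how such a calibration is inverted; the function name and numerical values are illustrative, not taken from the paper:

```python
def correct_shear(g_obs, m, c=0.0):
    """Invert the standard linear shear-bias model g_obs = (1 + m) * g_true + c."""
    return (g_obs - c) / (1.0 + m)

# Example: a residual multiplicative bias of m = -0.016 (the post-cuts
# selection bias quoted above) shifts a measured shear of 0.10 by ~1.6%.
print(round(correct_shear(0.10, -0.016), 5))  # prints 0.10163
```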
  3. ABSTRACT

    Recent cosmological analyses with large-scale structure and weak lensing measurements, usually referred to as 3 × 2pt, have had to discard substantial signal-to-noise from small scales due to our inability to accurately model non-linearities and baryonic effects. Galaxy–galaxy lensing, or the position–shear correlation between lens and source galaxies, is one of the three two-point correlation functions that are included in such analyses, usually estimated with the mean tangential shear. However, tangential shear measurements at a given angular scale θ or physical scale R carry information from all scales below that, forcing the scale cuts applied in real data to be significantly larger than the scale at which theoretical uncertainties become problematic. Recently, there have been a few independent efforts that aim to mitigate the non-locality of the galaxy–galaxy lensing signal. Here, we perform a comparison of the different methods, including the Y-transformation, the point-mass marginalization methodology, and the annular differential surface density statistic. We do the comparison at the cosmological constraints level in a combined galaxy clustering and galaxy–galaxy lensing analysis. We find that all the estimators yield equivalent cosmological results assuming a simulated Rubin Observatory Legacy Survey of Space and Time (LSST) Year 1-like set-up, and also when applied to DES Y3 data. With the LSST Y1 set-up, we find that the mitigation schemes yield ∼1.3 times more constraining S8 results than applying larger scale cuts without using any mitigation scheme.

     
  4. ABSTRACT

    Recent works have shown that weak lensing magnification must be included in upcoming large-scale structure analyses, such as for the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), to avoid biasing the cosmological results. In this work, we investigate whether including magnification has a positive impact on the precision of the cosmological constraints, as well as being necessary to avoid bias. We forecast this using an LSST mock catalogue and a halo model to calculate the galaxy power spectra. We find that including magnification has little effect on the precision of the cosmological parameter constraints for an LSST galaxy clustering analysis, where the halo model parameters are additionally constrained by the galaxy luminosity function. In particular, we find that for the LSST gold sample (i < 25.3) including weak lensing magnification only improves the galaxy clustering constraint on Ωm by a factor of 1.03, and when using a very deep LSST mock sample (i < 26.5) by a factor of 1.3. Since magnification predominantly contributes to the clustering measurement and provides similar information to that of cosmic shear, this improvement would be reduced for a combined galaxy clustering and shear analysis. We also confirm that not modelling weak lensing magnification will catastrophically bias the cosmological results from LSST. Magnification must therefore be included in LSST large-scale structure analyses even though it does not significantly enhance the precision of the cosmological constraints.

     
  5. ABSTRACT

    In cosmology, we routinely choose between models to describe our data, and can incur biases due to insufficient models or lose constraining power with overly complex models. In this paper, we propose an empirical approach to model selection that explicitly balances parameter bias against model complexity. Our method uses synthetic data to calibrate the relation between bias and the χ2 difference between models. This allows us to interpret χ2 values obtained from real data (even if catalogues are blinded) and choose a model accordingly. We apply our method to the problem of intrinsic alignments – one of the most significant weak lensing systematics, and a major contributor to the error budget in modern lensing surveys. Specifically, we consider the example of the Dark Energy Survey Year 3 (DES Y3), and compare the commonly used non-linear alignment (NLA) and tidal alignment and tidal torque (TATT) models. The models are calibrated against bias in the Ωm–S8 plane. Once noise is accounted for, we find that it is possible to set a threshold Δχ2 that guarantees an analysis using NLA is unbiased at some specified level Nσ and confidence level. By contrast, we find that theoretically defined thresholds (based on, e.g. p-values for χ2) tend to be overly optimistic, and do not reliably rule out cosmological biases up to ∼1–2σ. Considering the real DES Y3 cosmic shear results, based on the reported difference in χ2 from NLA and TATT analyses, we find a roughly 30 per cent chance that were NLA to be the fiducial model, the results would be biased (in the Ωm–S8 plane) by more than 0.3σ. More broadly, the method we propose here is simple and general, and requires a relatively low level of resources. We foresee applications to future analyses as a model selection tool in many contexts.
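
The calibration idea in this last abstract can be caricatured in a few lines: simulate many noisy realizations, record (Δχ2, parameter bias) pairs, and take the largest Δχ2 below which every realization stays within the allowed bias. Everything here (the distributions, the linear bias-versus-Δχ2 relation, the tolerance) is an assumption for illustration, not the authors' pipeline:

```python
import random

def calibrate_threshold(n_sims=2000, bias_tolerance=0.3, seed=1):
    """Toy empirical calibration: for many synthetic noisy realizations,
    record (delta-chi^2, |parameter bias|) pairs, then return the largest
    delta-chi^2 below which every realization stays within the allowed
    bias (in units of sigma)."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n_sims):
        dchi2 = abs(rng.gauss(0.0, 2.0))           # stand-in for chi2(NLA) - chi2(TATT)
        bias = 0.1 * dchi2 + rng.gauss(0.0, 0.05)  # assumed bias-vs-dchi2 relation
        pairs.append((dchi2, abs(bias)))
    threshold = 0.0
    for dchi2, bias in sorted(pairs):
        if bias > bias_tolerance:
            break
        threshold = dchi2
    return threshold
```

A real analysis would replace the toy draws with full likelihood fits of both models to each synthetic realization; the structure of the procedure is the same.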

     