
Title: Impact of point spread function higher moments error on weak gravitational lensing

Weak gravitational lensing is one of the most powerful tools for cosmology, but it is subject to challenges in quantifying subtle systematic biases. The point spread function (PSF) can cause biases in weak lensing shear inference when the PSF model does not match the true PSF that is convolved with the galaxy light profile. Although the effect of PSF size and shape errors – i.e. errors in second moments – is well studied, weak lensing systematics associated with errors in higher moments of the PSF model require further investigation. The goal of our study is to estimate their potential impact for LSST weak lensing analysis. We go beyond second moments of the PSF by using image simulations to relate multiplicative bias in shear to errors in the higher moments of the PSF model. We find that the current level of errors in higher moments of the PSF model in data from the Hyper Suprime-Cam survey can induce a ∼0.05 per cent shear bias, making this effect unimportant for ongoing surveys but relevant at the precision of upcoming surveys such as LSST.
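The weighted, standardized moments that this kind of analysis goes beyond second order can be measured with a short routine. This is a minimal sketch, not the paper's pipeline: the Gaussian weight width, the single centroid refinement, and the standardization convention are all illustrative assumptions.

```python
import numpy as np

def psf_moments(image, sigma_w=2.0, max_order=4):
    """Weighted, standardized moments M_pq of a PSF image for p + q >= 3.

    Illustrative sketch: the weight width and standardization convention
    here are assumptions, not the paper's exact measurement choices.
    """
    ny, nx = image.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # First-pass centroid from the unweighted image.
    tot = image.sum()
    xc, yc = (image * x).sum() / tot, (image * y).sum() / tot
    # A Gaussian weight suppresses noisy outer pixels; real pipelines
    # iterate this (adaptive moments), here we refine the centroid once.
    w = np.exp(-((x - xc) ** 2 + (y - yc) ** 2) / (2.0 * sigma_w ** 2))
    f = image * w
    norm = f.sum()
    xc, yc = (f * x).sum() / norm, (f * y).sum() / norm
    dx, dy = x - xc, y - yc
    # Second moments set the scale used to standardize the higher orders.
    sxx = (f * dx ** 2).sum() / norm
    syy = (f * dy ** 2).sum() / norm
    sigma = np.sqrt(0.5 * (sxx + syy))
    u, v = dx / sigma, dy / sigma
    moments = {}
    for p in range(max_order + 1):
        for q in range(max_order + 1 - p):
            if p + q >= 3:  # "higher moments": order three and above
                moments[(p, q)] = (f * u ** p * v ** q).sum() / norm
    return moments
```

For a round Gaussian PSF the odd moments vanish and M_22 equals 1 under this standardization, so deviations of the measured values from those references track the non-Gaussian PSF structure whose modelling errors propagate into shear bias.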

Journal Name: Monthly Notices of the Royal Astronomical Society
Page Range or eLocation-ID: p. 1978-1993
Publisher: Oxford University Press
Sponsoring Org: National Science Foundation
More Like this
  1. Weak lensing measurements suffer from well-known shear estimation biases, which can be partially corrected for with the use of image simulations. In this work we present an analysis of simulated images that mimic Hubble Space Telescope/Advanced Camera for Surveys observations of high-redshift galaxy clusters, including cluster-specific issues such as non-weak shear and increased blending. Our synthetic galaxies have been generated to have similar observed properties as the background-selected source samples studied in the real images. First, we used simulations with galaxies placed on a grid to determine a revised signal-to-noise-dependent (S/N_KSB) correction for multiplicative shear measurement bias, and to quantify the sensitivity of our KSB+ bias calibration to mismatches of galaxy or PSF properties between the real data and the simulations. Next, we studied the impact of increased blending and light contamination from cluster and foreground galaxies, finding it to be negligible for high-redshift (z > 0.7) clusters, whereas shear measurements can be affected at the ∼1% level for lower-redshift clusters given their brighter member galaxies. Finally, we studied the impact of fainter neighbours and selection bias using a set of simulated images that mimic the positions and magnitudes of galaxies in Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS) data, thereby including realistic clustering. While the initial SExtractor object detection causes a multiplicative shear selection bias of −0.028 ± 0.002, this is reduced to −0.016 ± 0.002 by further cuts applied in our pipeline. Given the limited depth of the CANDELS data, we compared our CANDELS-based estimate for the impact of faint neighbours on the multiplicative shear measurement bias to a grid-based analysis, to which we added clustered galaxies to even fainter magnitudes based on Hubble Ultra Deep Field data, yielding a refined estimate of ∼−0.013. Our sensitivity analysis suggests that our pipeline is calibrated to an accuracy of ∼0.015 once all corrections are applied, which is fully sufficient for current and near-future weak lensing studies of high-redshift clusters. As an application, we used it for a refined analysis of three highly relaxed clusters from the South Pole Telescope Sunyaev-Zeldovich survey, where we now included measurements down to the cluster core (r > 200 kpc) as enabled by our work. Compared to previously employed scales (r > 500 kpc), this tightens the cluster mass constraints by a factor of 1.38 on average.

  2. Recent works have shown that weak lensing magnification must be included in upcoming large-scale structure analyses, such as for the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), to avoid biasing the cosmological results. In this work, we investigate whether including magnification has a positive impact on the precision of the cosmological constraints, as well as being necessary to avoid bias. We forecast this using an LSST mock catalogue and a halo model to calculate the galaxy power spectra. We find that including magnification has little effect on the precision of the cosmological parameter constraints for an LSST galaxy clustering analysis, where the halo model parameters are additionally constrained by the galaxy luminosity function. In particular, we find that for the LSST gold sample (i < 25.3) including weak lensing magnification only improves the galaxy clustering constraint on Ωm by a factor of 1.03, and when using a very deep LSST mock sample (i < 26.5) by a factor of 1.3. Since magnification predominantly contributes to the clustering measurement and provides similar information to that of cosmic shear, this improvement would be reduced for a combined galaxy clustering and shear analysis. We also confirm that not modelling weak lensing magnification will catastrophically bias the cosmological results from LSST. Magnification must therefore be included in LSST large-scale structure analyses even though it does not significantly enhance the precision of the cosmological constraints.

  3. Although photometric redshifts (photo-z's) are crucial ingredients for current and upcoming large-scale surveys, the high-quality spectroscopic redshifts currently available to train, validate, and test them are substantially non-representative in both magnitude and colour. We investigate the nature and structure of this bias by tracking how objects from a heterogeneous training sample contribute to photo-z predictions as a function of magnitude and colour, and illustrate that the underlying redshift distribution at fixed colour can evolve strongly as a function of magnitude. We then test the robustness of the galaxy–galaxy lensing signal in 120 deg2 of HSC–SSP DR1 data to spectroscopic completeness and photo-z biases, and find that their impacts are sub-dominant to current statistical uncertainties. Our methodology provides a framework to investigate how spectroscopic incompleteness can impact photo-z-based weak lensing predictions in future surveys such as LSST and WFIRST.


  4. Upcoming deep imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time will be confronted with challenges that come with increased depth. One of the leading systematic errors in deep surveys is the blending of objects due to higher surface density in the more crowded images; a considerable fraction of the galaxies which we hope to use for cosmology analyses will overlap each other on the observed sky. In order to investigate these challenges, we emulate blending in a mock catalogue consisting of galaxies at a depth equivalent to 1.3 yr of the full 10-yr Rubin Observatory that includes effects due to weak lensing, ground-based seeing, and the uncertainties due to extraction of catalogues from imaging data. The emulated catalogue indicates that approximately 12 per cent of the observed galaxies are 'unrecognized' blends that contain two or more objects but are detected as one. Using the positions and shears of half a billion distant galaxies, we compute shear–shear correlation functions after selecting tomographic samples in terms of both spectroscopic and photometric redshift bins. We examine the sensitivity of the cosmological parameter estimation to unrecognized blending employing both jackknife and analytical Gaussian covariance estimators. An ∼0.025 decrease in the derived structure growth parameter S8 = σ8(Ωm/0.3)^0.5 is seen due to unrecognized blending in both tomographies with a slight additional bias for the photo-z-based tomography. This bias is greater than the 2σ statistical error in measuring S8.

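The structure-growth parameter quoted in the blending abstract is simple to evaluate, which puts the size of the reported ∼0.025 shift in context. A small sketch (the fiducial σ8 and Ωm values below are illustrative assumptions, not the mock catalogue's inputs):

```python
def s8(sigma_8, omega_m):
    """Structure-growth parameter S8 = sigma_8 * (Omega_m / 0.3) ** 0.5."""
    return sigma_8 * (omega_m / 0.3) ** 0.5

# Illustrative fiducial values (assumed here, not from the mock catalogue):
fiducial = s8(0.81, 0.31)   # ≈ 0.823
biased = fiducial - 0.025   # shift attributed to unrecognized blends
```

At these fiducial values the 0.025 shift is a roughly 3 per cent bias in S8, which is why it exceeds the quoted 2σ statistical error.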
  5. Modifications of the matter power spectrum due to baryonic physics are one of the major theoretical uncertainties in cosmological weak lensing measurements. Developing robust mitigation schemes for this source of systematic uncertainty increases the robustness of cosmological constraints, and may increase their precision if they enable the use of information from smaller scales. Here we explore the performance of two mitigation schemes for baryonic effects in weak lensing cosmic shear: the principal component analysis (PCA) method and the halo-model approach in hmcode. We construct mock tomographic shear power spectra from four hydrodynamical simulations, and run simulated likelihood analyses with cosmolike assuming LSST-like survey statistics. With an angular scale cut of ℓmax < 2000, both methods successfully remove the biases in cosmological parameters due to the various baryonic physics scenarios, with the PCA method causing less degradation in the parameter constraints than hmcode. For a more aggressive ℓmax = 5000, the PCA method performs well for all but one baryonic physics scenario, requiring additional training simulations to account for the extreme baryonic physics scenario of Illustris; hmcode exhibits tensions in the 2D posterior distributions of cosmological parameters due to lack of freedom in describing the power spectrum for $k \gt 10\,h^{-1}\, \mathrm{Mpc}$. We investigate variants of the PCA method and improve the bias mitigation through PCA by accounting for the noise properties in the data via Cholesky decomposition of the covariance matrix. Our improved PCA method allows us to retain more statistical constraining power while effectively mitigating baryonic uncertainties even for a broad range of baryonic physics scenarios.
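The Cholesky-based improvement described in the last abstract amounts to whitening the baryonic difference vectors by the data covariance before extracting principal components. Everything in this sketch — the function name, the input conventions, the use of a plain SVD — is an illustrative assumption rather than the paper's implementation:

```python
import numpy as np

def whitened_pca(delta_matrix, cov, n_pc=2):
    """Principal components of baryonic differences in noise-whitened space.

    Illustrative sketch: rows of `delta_matrix` are (hydro - gravity-only)
    model vectors from different simulations; `cov` is the data covariance.
    """
    L = np.linalg.cholesky(cov)        # cov = L @ L.T
    Linv = np.linalg.inv(L)
    # Whitening makes the noise isotropic, so the SVD ranks the baryonic
    # modes by their significance relative to the measurement uncertainty.
    white = delta_matrix @ Linv.T
    _, _, vt = np.linalg.svd(white, full_matrices=False)
    return vt[:n_pc]                   # leading PCs in whitened data space
```

In practice a triangular solve (e.g. `scipy.linalg.solve_triangular`) would replace the explicit inverse; the explicit form is kept here only to make the whitening step visible.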