Title: Forecasting the potential of weak lensing magnification to enhance LSST large-scale structure analyses
ABSTRACT

Recent works have shown that weak lensing magnification must be included in upcoming large-scale structure analyses, such as for the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), to avoid biasing the cosmological results. In this work, we investigate whether, beyond being necessary to avoid bias, including magnification also improves the precision of the cosmological constraints. We forecast this using an LSST mock catalogue and a halo model to calculate the galaxy power spectra. We find that including magnification has little effect on the precision of the cosmological parameter constraints for an LSST galaxy clustering analysis, where the halo model parameters are additionally constrained by the galaxy luminosity function. In particular, we find that for the LSST gold sample (i < 25.3) including weak lensing magnification only improves the galaxy clustering constraint on Ωm by a factor of 1.03, and when using a very deep LSST mock sample (i < 26.5) by a factor of 1.3. Since magnification predominantly contributes to the clustering measurement and provides similar information to that of cosmic shear, this improvement would be reduced for a combined galaxy clustering and shear analysis. We also confirm that not modelling weak lensing magnification will catastrophically bias the cosmological results from LSST. Magnification must therefore be included in LSST large-scale structure analyses even though it does not significantly enhance the precision of the cosmological constraints.
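The improvement factors quoted above come from comparing forecast parameter uncertainties with and without magnification in the model. As a rough, schematic illustration of such a comparison, the sketch below builds Fisher matrices from model derivatives of a binned data vector and compares the marginalized error on one parameter for the two cases; the array shapes, derivative values, and covariance are placeholder assumptions, not the paper's halo-model data vector.

```python
# Minimal sketch of a Fisher-matrix forecast comparing the marginalized error
# on one parameter (e.g. Omega_m) with and without magnification terms in the
# model. All inputs are illustrative placeholders.
import numpy as np

def fisher_matrix(derivs, inv_cov):
    """derivs: (n_params, n_data) array of dC/dtheta; inv_cov: (n_data, n_data)."""
    return derivs @ inv_cov @ derivs.T

def marginalized_error(fisher, index):
    """1-sigma error on parameter `index`, marginalized over the others."""
    return np.sqrt(np.linalg.inv(fisher)[index, index])

rng = np.random.default_rng(0)
n_params, n_data = 3, 50              # e.g. (Omega_m, sigma_8, galaxy bias); binned C_ell
inv_cov = np.eye(n_data)              # placeholder (whitened) data covariance

derivs_no_mag = rng.normal(size=(n_params, n_data))
# Magnification adds extra response of the clustering spectra to cosmology
# (placeholder extra derivative contribution).
derivs_with_mag = derivs_no_mag + 0.1 * rng.normal(size=(n_params, n_data))

err_no_mag = marginalized_error(fisher_matrix(derivs_no_mag, inv_cov), index=0)
err_with_mag = marginalized_error(fisher_matrix(derivs_with_mag, inv_cov), index=0)
print(f"improvement factor: {err_no_mag / err_with_mag:.2f}")
```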

 
NSF-PAR ID: 10366650
Author(s) / Creator(s):
Publisher / Repository: Oxford University Press
Date Published:
Journal Name: Monthly Notices of the Royal Astronomical Society
Volume: 513
Issue: 1
ISSN: 0035-8711
Format(s): Medium: X; Size: p. 1210-1228
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Modifications of the matter power spectrum due to baryonic physics are one of the major theoretical uncertainties in cosmological weak lensing measurements. Developing robust mitigation schemes for this source of systematic uncertainty increases the robustness of cosmological constraints, and may increase their precision if they enable the use of information from smaller scales. Here we explore the performance of two mitigation schemes for baryonic effects in weak lensing cosmic shear: the principal component analysis (PCA) method and the halo-model approach in hmcode. We construct mock tomographic shear power spectra from four hydrodynamical simulations, and run simulated likelihood analyses with cosmolike assuming LSST-like survey statistics. With an angular scale cut of ℓmax < 2000, both methods successfully remove the biases in cosmological parameters due to the various baryonic physics scenarios, with the PCA method causing less degradation in the parameter constraints than hmcode. For a more aggressive ℓmax = 5000, the PCA method performs well for all but one baryonic physics scenario, requiring additional training simulations to account for the extreme baryonic physics scenario of Illustris; hmcode exhibits tensions in the 2D posterior distributions of cosmological parameters due to lack of freedom in describing the power spectrum for $k \gt 10\, h\, \mathrm{Mpc}^{-1}$. We investigate variants of the PCA method and improve the bias mitigation through PCA by accounting for the noise properties in the data via Cholesky decomposition of the covariance matrix. Our improved PCA method allows us to retain more statistical constraining power while effectively mitigating baryonic uncertainties even for a broad range of baryonic physics scenarios.
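    As a rough illustration of the PCA mitigation idea described above, the sketch below whitens the differences between hydrodynamical and dark-matter-only mock data vectors using the Cholesky factor of the data covariance, extracts the leading principal components, and projects them out of the residual. The shapes, mock inputs, and number of removed modes are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of PCA mitigation of baryonic effects in a shear data vector,
# with noise weighting via the Cholesky factor of the data covariance.
import numpy as np

def build_pca_modes(dv_dmo, dv_hydro_set, cov, n_modes=2):
    """Leading PCs of the whitened hydro-minus-DMO difference vectors."""
    L = np.linalg.cholesky(cov)           # cov = L @ L.T
    L_inv = np.linalg.inv(L)
    diffs = np.array([L_inv @ (dv - dv_dmo) for dv in dv_hydro_set])
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:n_modes], L, L_inv

def remove_baryon_modes(data_vector, model_vector, pcs, L, L_inv):
    """Project the baryonic PCs out of the whitened residual, then un-whiten."""
    resid_w = L_inv @ (data_vector - model_vector)
    resid_w -= pcs.T @ (pcs @ resid_w)
    return model_vector + L @ resid_w

# Toy usage with a diagonal covariance and random mock vectors.
rng = np.random.default_rng(0)
n_data = 20
cov = np.diag(0.01 * np.ones(n_data))
dv_dmo = rng.normal(size=n_data)
dv_hydro_set = dv_dmo + 0.1 * rng.normal(size=(4, n_data))   # 4 hydro scenarios
pcs, L, L_inv = build_pca_modes(dv_dmo, dv_hydro_set, cov)
cleaned = remove_baryon_modes(dv_hydro_set[0], dv_dmo, pcs, L, L_inv)
```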
  2. ABSTRACT

    Recent cosmological analyses with large-scale structure and weak lensing measurements, usually referred to as 3 × 2pt, had to discard a significant amount of signal-to-noise from small scales due to our inability to accurately model non-linearities and baryonic effects. Galaxy–galaxy lensing, or the position–shear correlation between lens and source galaxies, is one of the three two-point correlation functions that are included in such analyses, usually estimated with the mean tangential shear. However, tangential shear measurements at a given angular scale θ or physical scale R carry information from all scales below that, forcing the scale cuts applied in real data to be significantly larger than the scale at which theoretical uncertainties become problematic. Recently, there have been a few independent efforts that aim to mitigate the non-locality of the galaxy–galaxy lensing signal. Here, we compare these methods, including the Y-transformation, the point-mass marginalization methodology, and the annular differential surface density statistic, at the level of cosmological constraints in a combined galaxy clustering and galaxy–galaxy lensing analysis. We find that all the estimators yield equivalent cosmological results assuming a simulated Rubin Observatory Legacy Survey of Space and Time (LSST) Year 1 like set-up, and also when applied to DES Y3 data. With the LSST Y1 set-up, we find that the mitigation schemes yield ∼1.3 times tighter constraints on S8 than applying larger scale cuts without any mitigation scheme.
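    As one concrete example of the mitigation schemes compared above, the point-mass marginalization idea can be sketched as adding a 1/r_p^2 template with a wide Gaussian prior on its amplitude; marginalizing that amplitude analytically simply inflates the data covariance by the prior variance times the template outer product. The radii, covariance, and prior width below are illustrative assumptions, not the settings used in the paper.

```python
# Minimal sketch of analytic point-mass marginalization for a Delta-Sigma
# (or tangential-shear) data vector: a B / r_p^2 template with a wide Gaussian
# prior on B is marginalized by adding sigma_B^2 * t t^T to the covariance.
import numpy as np

r_p = np.geomspace(0.3, 30.0, 15)                  # projected radii [Mpc/h]
cov = np.diag((0.05 * np.ones_like(r_p))**2)       # placeholder measurement covariance

template = 1.0 / r_p**2                            # point-mass contribution shape
sigma_B = 100.0                                    # wide prior on the point-mass amplitude

cov_marg = cov + sigma_B**2 * np.outer(template, template)
inv_cov_marg = np.linalg.inv(cov_marg)             # use in the Gaussian likelihood
```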

     
  3. ABSTRACT

    The combination of galaxy–galaxy lensing (GGL) and galaxy clustering is a powerful probe of low-redshift matter clustering, especially if it is extended to the non-linear regime. To this end, we use an N-body and halo occupation distribution (HOD) emulator method to model the redMaGiC sample of colour-selected passive galaxies in the Dark Energy Survey (DES), adding parameters that describe central galaxy incompleteness, galaxy assembly bias, and a scale-independent multiplicative lensing bias Alens. We use this emulator to forecast cosmological constraints attainable from the GGL surface density profile ΔΣ(rp) and the projected galaxy correlation function wp, gg(rp) in the final (Year 6) DES data set over scales $r_p=0.3\!-\!30.0\, h^{-1} \, \mathrm{Mpc}$. For a $3{{\ \rm per\ cent}}$ prior on Alens we forecast precisions of $1.9{{\ \rm per\ cent}}$, $2.0{{\ \rm per\ cent}}$, and $1.9{{\ \rm per\ cent}}$ on Ωm, σ8, and $S_8 \equiv \sigma _8\Omega _m^{0.5}$, marginalized over all HOD parameters as well as Alens. Adding scales $r_p=0.3\!-\!3.0\, h^{-1} \, \mathrm{Mpc}$ improves the S8 precision by a factor of ∼1.6 relative to a large-scale ($3.0\!-\!30.0\, h^{-1} \, \mathrm{Mpc}$) analysis, equivalent to increasing the survey area by a factor of ∼2.6. Sharpening the Alens prior to $1{{\ \rm per\ cent}}$ further improves the S8 precision to $1.1{{\ \rm per\ cent}}$, and it amplifies the gain from including non-linear scales. Our emulator achieves per cent-level accuracy similar to the projected DES statistical uncertainties, demonstrating the feasibility of a fully non-linear analysis. Obtaining precise parameter constraints from multiple galaxy types and from measurements that span linear and non-linear clustering offers many opportunities for internal cross-checks, which can diagnose systematics and demonstrate the robustness of cosmological results.
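    The quoted equivalence between the factor of ∼1.6 precision gain and a factor of ∼2.6 in survey area follows from the usual assumption that statistical errors scale as one over the square root of the survey area, as the short check below illustrates.

```python
# Back-of-the-envelope check, assuming errors scale as 1 / sqrt(survey area):
# a ~1.6x gain in S_8 precision is equivalent to ~1.6**2 ~ 2.6x more area.
precision_gain = 1.6
equivalent_area_factor = precision_gain**2
print(f"equivalent survey-area factor: {equivalent_area_factor:.1f}")  # ~2.6
```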

     
  4. ABSTRACT

    We measure the small-scale clustering of the Data Release 16 extended Baryon Oscillation Spectroscopic Survey Luminous Red Galaxy sample, corrected for fibre-collisions using Pairwise Inverse Probability weights, which give unbiased clustering measurements on all scales. We fit to the monopole and quadrupole moments and to the projected correlation function over the separation range $7-60\, h^{-1}{\rm Mpc}$ with a model based on the aemulus cosmological emulator to measure the growth rate of cosmic structure, parametrized by fσ8. We obtain a measurement of fσ8(z = 0.737) = 0.408 ± 0.038, which is 1.4σ lower than the value expected from 2018 Planck data for a flat ΛCDM model, and is more consistent with recent weak-lensing measurements. The level of precision achieved is 1.7 times better than more standard measurements made using only the large-scale modes of the same sample. We also fit to the data using the full range of scales $0.1\text{--}60\, h^{-1}{\rm Mpc}$ modelled by the aemulus cosmological emulator and find a 4.5σ tension in the amplitude of the halo velocity field with the Planck + ΛCDM model, driven by a mismatch on the non-linear scales. This may not be cosmological in origin, and could be due to a breakdown in the Halo Occupation Distribution model used in the emulator. Finally, we perform a robust analysis of possible sources of systematics, including the effects of redshift uncertainty and incompleteness due to target selection that were not included in previous analyses fitting to clustering measurements on small scales.
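    Assuming the quoted 1.4σ offset is expressed in units of the measurement uncertainty, the implied Planck + ΛCDM expectation can be reconstructed from the numbers above, as the short check below illustrates.

```python
# Back-of-the-envelope: if the quoted offset is in units of the measurement
# error, the implied Planck+LCDM expectation for f*sigma_8(z = 0.737) is
f_sigma8_meas, f_sigma8_err, offset_sigma = 0.408, 0.038, 1.4
implied_reference = f_sigma8_meas + offset_sigma * f_sigma8_err
print(f"implied reference value: {implied_reference:.3f}")  # ~0.46
```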

     
  5. ABSTRACT

    We investigate the sensitivity of large-scale structure analyses that combine photometric cosmic shear and galaxy clustering data (the now commonly used ‘3 × 2-point’ analysis) to the effects of lensing magnification. Using a Fisher matrix bias formalism, we disentangle how individual elements of the data vector contribute to the bias on cosmological parameters caused by ignoring the effects of magnification in a theory fit, for Stage-III and Stage-IV surveys. We show that removing elements of the data vector that are dominated by magnification does not guarantee a reduction in the cosmological bias due to the magnification signal, but can instead increase the sensitivity to magnification. We find that the most sensitive elements of the data vector come from the shear–clustering cross-correlations, particularly between the highest redshift shear bin and any lower redshift lens sample, and that the parameters ΩM, $S_8=\sigma _8\sqrt{\Omega _\mathrm{ M}/0.3}$, and w0 show the most significant biases for both survey models. Our forecasts predict that current analyses are not significantly biased by magnification, but this bias will become highly significant with the continued increase of statistical power in the near future. We therefore conclude that future surveys should measure and model the magnification as part of their flagship ‘3 × 2-point’ analysis.
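    The Fisher matrix bias formalism referred to above estimates the shift in the best-fitting parameters induced by a signal that is present in the data but absent from the model. The sketch below implements the standard first-order expression, δθ = F⁻¹ b with b_α = (∂D/∂θ_α)ᵀ C⁻¹ ΔD; the array shapes and inputs are placeholders, not the Stage-III or Stage-IV survey models used in the paper.

```python
# Minimal sketch of the Fisher-matrix bias formalism: the parameter shift
# induced by ignoring a signal Delta_D (here, magnification) in the model is
#   delta_theta = F^{-1} b,  with  b_a = (dD/dtheta_a)^T C^{-1} Delta_D.
import numpy as np

def fisher_bias(derivs, inv_cov, delta_data):
    """derivs: (n_params, n_data); inv_cov: (n_data, n_data); delta_data: (n_data,)."""
    fisher = derivs @ inv_cov @ derivs.T
    b = derivs @ inv_cov @ delta_data
    return np.linalg.solve(fisher, b)              # bias on each parameter

rng = np.random.default_rng(1)
n_params, n_data = 3, 40                           # e.g. (Omega_m, S_8, w_0); binned 3x2pt data
derivs = rng.normal(size=(n_params, n_data))       # placeholder model derivatives
inv_cov = np.eye(n_data)                           # placeholder inverse covariance
delta_data = 0.05 * rng.normal(size=n_data)        # unmodelled magnification signal

print(fisher_bias(derivs, inv_cov, delta_data))
```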

     