

Search results for all records with Creators/Authors containing "LSST Dark Energy Science Collaboration"



  1. ABSTRACT

    We present a study of the potential for convolutional neural networks (CNNs) to separate astrophysical transients from image artifacts, a task known as “real–bogus” classification, without requiring a template-subtracted (or difference) image, which is computationally expensive to generate because it involves image matching on small spatial scales across large volumes of data. Using data from the Dark Energy Survey, we explore the use of CNNs to (1) automate real–bogus classification and (2) reduce the computational cost of transient discovery. We compare the efficiency of two CNNs with similar architectures: one that uses “image triplets” (template, search, and difference images) and one that takes as input only the template and search images. We measure the decrease in efficiency associated with the loss of information in the input, finding that the testing accuracy is reduced from ∼96% to ∼91.1%. We further investigate how the latter model learns the required information from the template and search images by exploring its saliency maps. Our work (1) confirms that CNNs are excellent models for real–bogus classification that rely exclusively on the imaging data and require no feature engineering and (2) demonstrates that high-accuracy (>90%) models can be built without constructing difference images, at the cost of some accuracy. Because, once trained, neural networks can generate predictions at minimal computational cost, we argue that future implementations of this methodology could dramatically reduce the computational cost of detecting transients in synoptic surveys like Rubin Observatory's Legacy Survey of Space and Time by bypassing difference image analysis entirely.

     
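    The two-input model described above amounts to a small convolutional classifier that sees only the template and search cutouts, stacked as image channels, rather than the full template/search/difference triplet. The sketch below (TensorFlow/Keras) illustrates that interface; the 51-pixel stamp size, layer widths, and training settings are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch, not the authors' code: a real-bogus CNN that takes only the
# template and search cutouts, stacked as two channels.  Setting N_CHANNELS = 3
# would emulate the triplet (template, search, difference) variant.
import numpy as np
from tensorflow.keras import layers, models

STAMP = 51          # assumed cutout size in pixels
N_CHANNELS = 2      # template + search

def build_real_bogus_cnn(stamp=STAMP, n_channels=N_CHANNELS):
    """Small convolutional classifier: stacked cutouts in, P(real) out."""
    inputs = layers.Input(shape=(stamp, stamp, n_channels))
    x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
    x = layers.Flatten()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)   # P(real)
    model = models.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    model = build_real_bogus_cnn()
    # Fake batch of 8 template+search cutout pairs, just to show the interface.
    cutouts = np.random.normal(size=(8, STAMP, STAMP, N_CHANNELS)).astype("float32")
    print(model.predict(cutouts).shape)   # -> (8, 1)
```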
  2. ABSTRACT

    As we observe a rapidly growing number of astrophysical transients, we learn more about the diverse host galaxy environments in which they occur. Host galaxy information can be used to purify samples of cosmological Type Ia supernovae, uncover the progenitor systems of individual classes, and facilitate low-latency follow-up of rare and peculiar explosions. In this work, we develop a novel data-driven methodology to simulate the time-domain sky that includes detailed modelling of the probability density function for multiple transient classes conditioned on host galaxy magnitudes, colours, star formation rates, and masses. We have designed these simulations to optimize photometric classification and analysis in upcoming large synoptic surveys. We integrate host galaxy information into the SNANA simulation framework to construct the Simulated Catalogue of Optical Transients and Correlated Hosts (SCOTCH), a publicly available catalogue of 5 million idealized transient light curves in LSST passbands and their host galaxy properties over the redshift range 0 < z < 3. This catalogue includes supernovae, tidal disruption events, kilonovae, and active galactic nuclei. Each light curve consists of true top-of-the-galaxy magnitudes sampled with high (≲2 d) cadence. In conjunction with SCOTCH, we also release an associated set of tutorials and transient-specific libraries to enable simulations of arbitrary space- and ground-based surveys. Our methodology is being used to test critical science infrastructure in advance of surveys by the Vera C. Rubin Observatory and the Nancy Grace Roman Space Telescope.

     
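    The core of the methodology described above, drawing each simulated transient from a class probability distribution conditioned on its host galaxy's properties, can be sketched very simply. The toy example below is not the SCOTCH/SNANA implementation; the class list, the single stellar-mass split, and the conditional probabilities are invented placeholders.

```python
# Illustrative sketch: draw a transient class for each simulated host from a
# probability table conditioned on a (toy) host-galaxy property bin.
import numpy as np

rng = np.random.default_rng(42)
classes = ["SN Ia", "SN II", "TDE", "KN", "AGN"]       # placeholder class set

# P(class | host bin): rows are host stellar-mass bins, columns are classes.
conditional_pmf = np.array([
    [0.20, 0.55, 0.05, 0.05, 0.15],   # low-mass hosts (assumed values)
    [0.45, 0.20, 0.10, 0.02, 0.23],   # high-mass hosts (assumed values)
])

def draw_transient_class(log_host_mass, mass_split=10.0):
    """Pick a transient class conditioned on a toy host stellar-mass bin."""
    host_bin = int(log_host_mass >= mass_split)
    return rng.choice(classes, p=conditional_pmf[host_bin])

# Toy host catalogue: log10(M*/Msun) for a handful of hosts.
host_masses = [9.2, 10.8, 10.1, 8.7]
print([draw_transient_class(m) for m in host_masses])
```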
  3. ABSTRACT

    Upcoming deep imaging surveys such as the Vera C. Rubin Observatory Legacy Survey of Space and Time will be confronted with challenges that come with increased depth. One of the leading systematic errors in deep surveys is the blending of objects due to the higher surface density in more crowded images; a considerable fraction of the galaxies we hope to use for cosmology analyses will overlap each other on the observed sky. To investigate these challenges, we emulate blending in a mock catalogue consisting of galaxies at a depth equivalent to 1.3 yr of the full 10-yr Rubin Observatory survey, including effects due to weak lensing, ground-based seeing, and the uncertainties from extracting catalogues from imaging data. The emulated catalogue indicates that approximately 12 per cent of the observed galaxies are ‘unrecognized’ blends that contain two or more objects but are detected as one. Using the positions and shears of half a billion distant galaxies, we compute shear–shear correlation functions after selecting tomographic samples in terms of both spectroscopic and photometric redshift bins. We examine the sensitivity of the cosmological parameter estimation to unrecognized blending employing both jackknife and analytical Gaussian covariance estimators. We find an ∼0.025 decrease in the derived structure growth parameter S8 = σ8(Ωm/0.3)^0.5 due to unrecognized blending in both tomographies, with a slight additional bias for the photo-z-based tomography. This bias is greater than the 2σ statistical error in measuring S8.

     
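    For reference, the structure growth parameter quoted above is a simple combination of σ8 and Ωm. The short calculation below (with an assumed fiducial cosmology, not the paper's exact values) shows the definition and the size of the reported ∼0.025 blending shift relative to it.

```python
# Quick numerical illustration of S8 = sigma_8 * (Omega_m / 0.3)**0.5 and of the
# magnitude of the blending-induced shift reported in the abstract.
# The fiducial sigma_8 and Omega_m are assumed for illustration only.
import numpy as np

def s8(sigma8, omega_m):
    """Structure growth parameter S8."""
    return sigma8 * np.sqrt(omega_m / 0.3)

sigma8_fid, omega_m_fid = 0.81, 0.30          # assumed fiducial cosmology
s8_fid = s8(sigma8_fid, omega_m_fid)
s8_blended = s8_fid - 0.025                   # shift quoted in the abstract

print(f"fiducial S8      : {s8_fid:.3f}")
print(f"with blending    : {s8_blended:.3f}")
print(f"fractional shift : {abs(s8_blended - s8_fid) / s8_fid:.1%}")
```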
  4. ABSTRACT

    Weak gravitational lensing is one of the most powerful tools for cosmology, but it is subject to challenges in quantifying subtle systematic biases. The point spread function (PSF) can cause biases in weak lensing shear inference when the PSF model does not match the true PSF that is convolved with the galaxy light profile. Although the effect of PSF size and shape errors – i.e. errors in second moments – is well studied, weak lensing systematics associated with errors in higher moments of the PSF model require further investigation. The goal of our study is to estimate their potential impact on LSST weak lensing analysis. We go beyond second moments of the PSF by using image simulations to relate multiplicative bias in shear to errors in the higher moments of the PSF model. We find that the current level of errors in higher moments of the PSF model in data from the Hyper Suprime-Cam survey can induce a ∼0.05 per cent shear bias, making this effect unimportant for ongoing surveys but relevant at the precision of upcoming surveys such as LSST.

     
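    "Higher moments of the PSF" here means standardized image moments of order greater than two, measured from a pixelized PSF stamp. The sketch below computes two such moments with plain NumPy; the circular Gaussian test stamp and the particular (3, 0) and (4, 0) moments are illustrative choices, not the paper's measurement code.

```python
# Minimal sketch: standardized moments of order > 2 of a 2D PSF image.
import numpy as np

def standardized_moment(image, p, q):
    """(p, q)-th standardized moment of a 2D image about its centroid."""
    w = image / image.sum()
    ny, nx = w.shape
    y, x = np.mgrid[0:ny, 0:nx]
    xc, yc = (w * x).sum(), (w * y).sum()
    sx = np.sqrt((w * (x - xc) ** 2).sum())
    sy = np.sqrt((w * (y - yc) ** 2).sum())
    u, v = (x - xc) / sx, (y - yc) / sy          # standardized coordinates
    return (w * u ** p * v ** q).sum()

# Toy circular Gaussian PSF stamp: odd moments ~ 0, the (4, 0) moment ~ 3.
y, x = np.mgrid[-25:26, -25:26]
psf = np.exp(-(x ** 2 + y ** 2) / (2.0 * 3.0 ** 2))

print("skewness-like (3, 0) moment:", round(standardized_moment(psf, 3, 0), 3))
print("kurtosis-like (4, 0) moment:", round(standardized_moment(psf, 4, 0), 3))
```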
  5. ABSTRACT

    Recent works have shown that weak lensing magnification must be included in upcoming large-scale structure analyses, such as for the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), to avoid biasing the cosmological results. In this work, we investigate whether including magnification also improves the precision of the cosmological constraints, in addition to being necessary to avoid bias. We forecast this using an LSST mock catalogue and a halo model to calculate the galaxy power spectra. We find that including magnification has little effect on the precision of the cosmological parameter constraints for an LSST galaxy clustering analysis, where the halo model parameters are additionally constrained by the galaxy luminosity function. In particular, we find that for the LSST gold sample (i < 25.3) including weak lensing magnification only improves the galaxy clustering constraint on Ωm by a factor of 1.03, and for a very deep LSST mock sample (i < 26.5) by a factor of 1.3. Since magnification predominantly contributes to the clustering measurement and provides similar information to that of cosmic shear, this improvement would be reduced in a combined galaxy clustering and shear analysis. We also confirm that not modelling weak lensing magnification will catastrophically bias the cosmological results from LSST. Magnification must therefore be included in LSST large-scale structure analyses even though it does not significantly enhance the precision of the cosmological constraints.

     
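    The way magnification enters a clustering measurement can be written down compactly: the observed galaxy overdensity gains a term proportional to the lensing convergence, with a prefactor (5s - 2) set by the slope s of the cumulative number counts at the flux limit. The sketch below illustrates only that standard relation; the slope values, bias, and toy fields are invented and are not the paper's forecast inputs.

```python
# Schematic sketch of the magnification term in the observed number counts:
# delta_obs = b * delta_m + (5s - 2) * kappa.  All numbers are placeholders.
import numpy as np

def magnification_coefficient(s):
    """Prefactor of the convergence term, with s the number-count slope."""
    return 5.0 * s - 2.0

def observed_overdensity(delta_matter, kappa, galaxy_bias, s):
    """Linear sketch of the observed galaxy overdensity including magnification."""
    return galaxy_bias * delta_matter + magnification_coefficient(s) * kappa

rng = np.random.default_rng(1)
delta_m = rng.normal(0.0, 0.10, size=1000)   # toy matter overdensity field
kappa = rng.normal(0.0, 0.02, size=1000)     # toy convergence along the same sightlines

for label, s in [("toy gold-like sample", 0.30), ("toy deep sample", 0.45)]:
    d_obs = observed_overdensity(delta_m, kappa, galaxy_bias=1.5, s=s)
    print(f"{label}: 5s - 2 = {magnification_coefficient(s):+.2f}, "
          f"std(delta_obs) = {d_obs.std():.3f}")
```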
  6. ABSTRACT

    Obtaining accurately calibrated redshift distributions of photometric samples is one of the great challenges in photometric surveys such as LSST, Euclid, HSC, KiDS, and DES. We present an inference methodology that combines the redshift information from the galaxy photometry with constraints from two-point functions, utilizing cross-correlations with spatially overlapping spectroscopic samples, and illustrate the approach on CosmoDC2 simulations. Our likelihood framework is designed to integrate directly into a typical large-scale structure and weak lensing analysis based on two-point functions. We discuss efficient and accurate inference techniques that allow us to scale the method to the large galaxy samples expected in LSST. We consider statistical challenges such as the parametrization of redshift systematics, discuss and evaluate techniques to regularize the sample redshift distributions, and investigate methods that can help detect and calibrate sources of systematic error using posterior predictive checks. We evaluate and forecast photometric redshift performance using data from the CosmoDC2 simulations, within which we mimic a DESI-like spectroscopic calibration sample for cross-correlations. Using a combination of spatial cross-correlations and photometry, we show that we can calibrate the mean of the sample redshift distribution to an accuracy of at least 0.002(1 + z), consistent with the LSST-Y1 science requirements for weak lensing and large-scale structure probes.

     
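    The cross-correlation part of the method above rests on a simple idea: the angular cross-correlation amplitude between the photometric sample and narrow spectroscopic redshift slices traces the photometric sample's redshift distribution, up to bias and dark-matter clustering factors. The toy sketch below shows that inversion only; the amplitudes, biases, and true n(z) are fabricated and bear no relation to the CosmoDC2 or DESI-like samples used in the work.

```python
# Toy clustering-redshift sketch: recover n(z) of a photometric sample from its
# cross-correlation amplitudes with spectroscopic slices.  Everything is fake.
import numpy as np

z_grid = np.linspace(0.1, 1.5, 15)                 # centres of spec-z slices
dz = z_grid[1] - z_grid[0]

true_nz = np.exp(-0.5 * ((z_grid - 0.7) / 0.25) ** 2)
true_nz /= true_nz.sum() * dz

b_phot = 1.3                                       # assumed photometric-sample bias
b_spec = 1.8 + 0.5 * z_grid                        # assumed spectroscopic bias evolution
w_dm = 0.01 / (1.0 + z_grid)                       # stand-in dark-matter clustering amplitude

# "Measured" cross-correlation amplitudes with a little noise added.
rng = np.random.default_rng(0)
w_sp = b_phot * b_spec * w_dm * true_nz * (1.0 + rng.normal(0.0, 0.02, z_grid.size))

# Divide out the bias and dark-matter factors, then renormalize.
nz_hat = w_sp / (b_phot * b_spec * w_dm)
nz_hat /= nz_hat.sum() * dz

shift = np.sum(z_grid * (nz_hat - true_nz)) * dz
print(f"shift in the recovered mean redshift: {shift:+.5f}")
```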
  7. ABSTRACT

    Until recently, mass-mapping techniques for weak gravitational lensing convergence reconstruction have lacked a principled statistical framework with which to quantify reconstruction uncertainties without making strong assumptions of Gaussianity. In previous work, we presented a sparse hierarchical Bayesian formalism for convergence reconstruction that addresses this shortcoming. Here, we draw on the concept of local credible intervals (cf. Bayesian error bars) as an extension of the uncertainty quantification techniques previously detailed. These uncertainty quantification techniques are benchmarked against those recovered via Px-MALA – a state-of-the-art proximal Markov chain Monte Carlo (MCMC) algorithm. We find that our recovered uncertainties are typically conservative everywhere (they never underestimate the uncertainty, and the approximation error is bounded from above), are of similar magnitude to, and are highly correlated with, those recovered via Px-MALA. Moreover, we demonstrate an increase in computational efficiency of O(10^6) when using our sparse Bayesian approach over MCMC techniques. This computational saving is critical for the application of Bayesian uncertainty quantification to large-scale Stage IV surveys such as LSST and Euclid.

     
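    The "local credible intervals" mentioned above are per-region Bayesian error bars on the reconstructed convergence map. The sketch below computes them in the simplest possible way, from a stack of posterior map samples of the kind an MCMC benchmark such as Px-MALA would produce; it is not the paper's sparse hierarchical method, and the fake samples and 4x4 superpixel size are illustrative.

```python
# Simplified sketch: per-superpixel credible intervals from posterior samples
# of a convergence map.
import numpy as np

def local_credible_intervals(samples, superpixel=4, level=0.95):
    """Return per-superpixel (lo, hi) bounds from map samples of shape (n, ny, nx)."""
    n, ny, nx = samples.shape
    sy, sx = ny // superpixel, nx // superpixel
    # Average each map over superpixel blocks, then take percentiles over samples.
    blocks = samples[:, :sy * superpixel, :sx * superpixel]
    blocks = blocks.reshape(n, sy, superpixel, sx, superpixel).mean(axis=(2, 4))
    alpha = 100.0 * (1.0 - level) / 2.0
    return (np.percentile(blocks, alpha, axis=0),
            np.percentile(blocks, 100.0 - alpha, axis=0))

# Fake "posterior samples": a smooth truth plus sample-to-sample noise.
rng = np.random.default_rng(3)
truth = rng.normal(0.0, 0.02, size=(32, 32))
samples = truth + rng.normal(0.0, 0.01, size=(500, 32, 32))

lo, hi = local_credible_intervals(samples)
print("mean credible-interval width per superpixel:", float((hi - lo).mean()))
```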