Search results: All records where Creators/Authors contains "The LSST Dark Energy Science Collaboration"


  1. Abstract

    Large imaging surveys will rely on photometric redshifts (photo-z's), which are typically estimated through machine-learning methods. Currently planned spectroscopic surveys will not be deep enough to produce a representative training sample for the Legacy Survey of Space and Time (LSST), so we seek methods to improve the photo-z estimates that arise from nonrepresentative training samples. Spectroscopic training samples for photo-z's are biased toward redder, brighter galaxies, which also tend to be at lower redshift than the typical galaxy observed by LSST, leading to poor photo-z estimates with outlier fractions nearly 4 times larger than for a representative training sample. In this Letter, we apply the concept of training sample augmentation, where we augment simulated nonrepresentative training samples with simulated galaxies possessing otherwise unrepresented features. When we select simulated galaxies with (g-z) color, i-band magnitude, and redshift outside the range of the original training sample, we are able to reduce the outlier fraction of the photo-z estimates for simulated LSST data by nearly 50% and the normalized median absolute deviation (NMAD) by 56%. When compared to a fully representative training sample, augmentation can recover nearly 70% of the degradation in the outlier fraction and 80% of the degradation in NMAD. Training sample augmentation is a simple and effective way to improve training samples for photo-z's without requiring additional spectroscopic samples.

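    For illustration, a minimal Python sketch of the augmentation step and the metrics quoted above, assuming pandas DataFrames with LSST-like band columns and a known redshift column; the column names, selection cuts, and choice of a random-forest regressor are assumptions made here for concreteness, not the Letter's exact pipeline.

    ```python
    # Sketch: augment a non-representative training set with simulated galaxies
    # whose (g-z) colour, i-band magnitude, or redshift fall outside its range,
    # then evaluate outlier fraction and NMAD. Column names are assumptions.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    BANDS = ["u", "g", "r", "i", "z", "y"]  # LSST-like photometry columns

    def augment_training_set(train: pd.DataFrame, sims: pd.DataFrame) -> pd.DataFrame:
        """Append simulated galaxies lying outside the range of the training set."""
        gz_train, gz_sims = train["g"] - train["z"], sims["g"] - sims["z"]
        outside = (
            (gz_sims < gz_train.min()) | (gz_sims > gz_train.max())
            | (sims["i"] > train["i"].max())
            | (sims["redshift"] > train["redshift"].max())
        )
        return pd.concat([train, sims[outside]], ignore_index=True)

    def photo_z_metrics(z_true, z_phot, outlier_cut=0.15):
        """Outlier fraction and NMAD of dz = (z_phot - z_true) / (1 + z_true)."""
        dz = (np.asarray(z_phot) - np.asarray(z_true)) / (1.0 + np.asarray(z_true))
        nmad = 1.48 * np.median(np.abs(dz - np.median(dz)))
        return float(np.mean(np.abs(dz) > outlier_cut)), float(nmad)

    # Usage sketch:
    # augmented = augment_training_set(train, sims)
    # model = RandomForestRegressor(n_estimators=200)
    # model.fit(augmented[BANDS], augmented["redshift"])
    # frac_out, nmad = photo_z_metrics(test["redshift"], model.predict(test[BANDS]))
    ```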
  2. Abstract

    We present a study of the potential for convolutional neural networks (CNNs) to separate astrophysical transients from image artifacts, a task known as “real–bogus” classification, without requiring a template-subtracted (or difference) image, which is computationally expensive to generate, requiring image matching on small spatial scales in large volumes of data. Using data from the Dark Energy Survey, we explore the use of CNNs to (1) automate the real–bogus classification and (2) reduce the computational costs of transient discovery. We compare the efficiency of two CNNs with similar architectures, one that uses “image triplets” (template, search, and difference images) and one that takes as input the template and search images only. We measure the decrease in efficiency associated with the loss of information in the input, finding that the testing accuracy is reduced from ∼96% to ∼91.1%. We further investigate how the latter model learns the required information from the template and search images by exploring saliency maps. Our work (1) confirms that CNNs are excellent models for real–bogus classification that rely exclusively on the imaging data and require no feature engineering, and (2) demonstrates that high-accuracy (>90%) models can be built without constructing difference images, at the cost of some accuracy. Because, once trained, neural networks can generate predictions at minimal computational cost, we argue that future implementations of this methodology could dramatically reduce the computational costs of detecting transients in synoptic surveys like Rubin Observatory's Legacy Survey of Space and Time by bypassing the difference image analysis entirely.

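    For illustration, a minimal sketch of a "template + search" (no difference image) real-bogus classifier of the kind compared above, assuming cutouts stacked along the channel axis; the stamp size, layer widths, and training setup are placeholders, not the architectures used in the paper.

    ```python
    # Sketch: a small binary CNN that takes only the template and search cutouts
    # (two channels) and predicts P(real). Layer sizes are illustrative.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    def build_real_bogus_cnn(stamp_size: int = 51, n_channels: int = 2) -> tf.keras.Model:
        inputs = layers.Input(shape=(stamp_size, stamp_size, n_channels))
        x = layers.Conv2D(32, 3, activation="relu", padding="same")(inputs)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D()(x)
        x = layers.Conv2D(128, 3, activation="relu", padding="same")(x)
        x = layers.GlobalAveragePooling2D()(x)
        x = layers.Dense(64, activation="relu")(x)
        outputs = layers.Dense(1, activation="sigmoid")(x)  # P(real)
        model = models.Model(inputs, outputs)
        model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
        return model

    # Usage sketch (cutouts: array of shape (N, 51, 51, 2); labels: 0 = bogus, 1 = real):
    # model = build_real_bogus_cnn()
    # model.fit(cutouts, labels, validation_split=0.2, epochs=20)
    ```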
  3. Abstract

    As we observe a rapidly growing number of astrophysical transients, we learn more about the diverse host galaxy environments in which they occur. Host galaxy information can be used to purify samples of cosmological Type Ia supernovae, uncover the progenitor systems of individual classes, and facilitate low-latency follow-up of rare and peculiar explosions. In this work, we develop a novel data-driven methodology to simulate the time-domain sky that includes detailed modelling of the probability density function for multiple transient classes conditioned on host galaxy magnitudes, colours, star formation rates, and masses. We have designed these simulations to optimize photometric classification and analysis in upcoming large synoptic surveys. We integrate host galaxy information into the SNANA simulation framework to construct the Simulated Catalogue of Optical Transients and Correlated Hosts (SCOTCH), a publicly available catalogue of 5 million idealized transient light curves in LSST passbands and their host galaxy properties over the redshift range 0 < z < 3. This catalogue includes supernovae, tidal disruption events, kilonovae, and active galactic nuclei. Each light curve consists of true top-of-the-galaxy magnitudes sampled with high (≲2 d) cadence. In conjunction with SCOTCH, we also release an associated set of tutorials and transient-specific libraries to enable simulations of arbitrary space- and ground-based surveys. Our methodology is being used to test critical science infrastructure in advance of surveys by the Vera C. Rubin Observatory and the Nancy Grace Roman Space Telescope.

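    As a toy illustration of the class-conditional host association described above, the sketch below draws hosts for a given transient class with weights that depend on host properties; the column names and weighting choices (core-collapse SNe roughly tracing star formation, SNe Ia tracing stellar mass) are simplified stand-ins, not the conditional probability density functions implemented in SNANA/SCOTCH.

    ```python
    # Sketch: sample host galaxies for a transient class, weighting each galaxy
    # by a class-dependent function of its properties. Weights are toy choices.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)

    def host_weights(hosts: pd.DataFrame, transient_class: str) -> np.ndarray:
        """Relative probability of each catalogue galaxy hosting the given class."""
        if transient_class == "SNII":      # core collapse: roughly follows SFR
            w = hosts["sfr"].clip(lower=0.0).to_numpy()
        elif transient_class == "SNIa":    # thermonuclear: roughly follows stellar mass
            w = hosts["stellar_mass"].to_numpy()
        else:                              # fallback: uniform over the catalogue
            w = np.ones(len(hosts))
        w = np.asarray(w, dtype=float)
        return w / w.sum()

    def draw_hosts(hosts: pd.DataFrame, transient_class: str, n_events: int) -> pd.DataFrame:
        """Sample host galaxies (with replacement) for n_events transients."""
        idx = rng.choice(len(hosts), size=n_events, p=host_weights(hosts, transient_class))
        return hosts.iloc[idx].reset_index(drop=True)

    # Usage sketch, with hypothetical columns "stellar_mass", "sfr", "redshift":
    # snia_hosts = draw_hosts(host_catalogue, "SNIa", n_events=1000)
    ```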
  4. Abstract

    Weak gravitational lensing is one of the most powerful tools for cosmology, but it is subject to challenges in quantifying subtle systematic biases. The point spread function (PSF) can cause biases in weak lensing shear inference when the PSF model does not match the true PSF that is convolved with the galaxy light profile. Although the effect of PSF size and shape errors – i.e. errors in second moments – is well studied, weak lensing systematics associated with errors in higher moments of the PSF model require further investigation. The goal of our study is to estimate their potential impact on LSST weak lensing analysis. We go beyond second moments of the PSF by using image simulations to relate multiplicative bias in shear to errors in the higher moments of the PSF model. We find that the current level of errors in higher moments of the PSF model in data from the Hyper Suprime-Cam survey can induce a ∼0.05 per cent shear bias, making this effect unimportant for ongoing surveys but relevant at the precision of upcoming surveys such as LSST.

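    As an illustration of the quantities involved, a minimal NumPy sketch of measuring weighted, standardized higher moments of a PSF stamp, whose true-minus-model residuals source the multiplicative shear bias quantified above; the fixed Gaussian weight and the diagonal standardization are simplifying assumptions rather than the adaptive-moments measurement used in the paper.

    ```python
    # Sketch: weighted standardized moments M_pq of a PSF image for 3 <= p+q <= 4,
    # i.e. moments beyond the usual second-moment size and shape.
    import numpy as np

    def higher_moments(image: np.ndarray, sigma_w: float = 2.5) -> dict:
        """Weighted standardized moments M_pq of a PSF stamp, 3 <= p + q <= 4."""
        ny, nx = image.shape
        y, x = np.mgrid[:ny, :nx]
        # fixed Gaussian weight centred on the stamp (a simplifying assumption)
        w = image * np.exp(-((x - nx / 2.0) ** 2 + (y - ny / 2.0) ** 2) / (2.0 * sigma_w**2))
        norm = w.sum()
        xc, yc = (w * x).sum() / norm, (w * y).sum() / norm
        u, v = x - xc, y - yc
        # second moments set the standardization scale (diagonal only, for brevity)
        us = u / np.sqrt((w * u * u).sum() / norm)
        vs = v / np.sqrt((w * v * v).sum() / norm)
        return {
            (p, q): (w * us**p * vs**q).sum() / norm
            for p in range(5)
            for q in range(5)
            if 3 <= p + q <= 4
        }

    # Per-moment residuals between the true PSF and the PSF model:
    # dM = {pq: higher_moments(psf_true)[pq] - higher_moments(psf_model)[pq]
    #       for pq in higher_moments(psf_true)}
    ```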