Title: The Sensitivity of GPz Estimates of Photo-z Posterior PDFs to Realistically Complex Training Set Imperfections
Abstract: The accurate estimation of photometric redshifts is crucial to many upcoming galaxy surveys, for example, the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). Almost all Rubin extragalactic and cosmological science requires accurate and precise calculation of photometric redshifts; many diverse approaches to this problem are currently being developed, validated, and tested. In this work, we use the photometric redshift code GPz to examine two realistically complex training-set imperfection scenarios for machine-learning-based photometric redshift calculation: (i) where the spectroscopic training set has a very different distribution in color–magnitude space from the test set, and (ii) where emission-line confusion causes a fraction of the spectroscopic training sample to have an incorrect redshift. By evaluating the sensitivity of GPz to a range of increasingly severe imperfections, using a range of metrics (for both photo-z point estimates and posterior probability distribution functions, PDFs), we quantify the degree to which predictions worsen with higher degrees of degradation. In particular, we find a substantial drop-off in photo-z quality when line confusion exceeds ∼1%, and when the training sample is incomplete below a redshift of 1.5, for an experimental setup using data from the Buzzard Flock synthetic sky catalogs.
Award ID(s): 1715122
NSF-PAR ID: 10382166
Author(s) / Creator(s):
Date Published:
Journal Name: Publications of the Astronomical Society of the Pacific
Volume: 134
Issue: 1034
ISSN: 0004-6280
Page Range / eLocation ID: 044501
Format(s): Medium: X
Sponsoring Org: National Science Foundation
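The paper above quantifies degradation with both point-estimate and PDF metrics. As a minimal sketch of the standard point-estimate metrics commonly used in this context (mean bias, normalized-MAD scatter, and catastrophic-outlier fraction), the snippet below assumes the usual definition Δz = (z_phot − z_spec)/(1 + z_spec) and an illustrative 0.15 outlier cut; it is not the paper's own evaluation code, and the synthetic redshifts are placeholders rather than Buzzard Flock data.

```python
import numpy as np

def point_estimate_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Standard photo-z point-estimate metrics (illustrative definitions):
    mean bias <dz>, robust NMAD scatter, and catastrophic-outlier fraction,
    with dz = (z_phot - z_spec) / (1 + z_spec)."""
    z_phot, z_spec = np.asarray(z_phot), np.asarray(z_spec)
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    return {
        "bias": np.mean(dz),
        "nmad": 1.4826 * np.median(np.abs(dz - np.median(dz))),
        "outlier_frac": np.mean(np.abs(dz) > outlier_cut),
    }

# Placeholder example with synthetic redshifts (not Buzzard Flock data).
rng = np.random.default_rng(0)
z_spec = rng.uniform(0.0, 2.0, 10_000)
z_phot = z_spec + rng.normal(0.0, 0.03, 10_000) * (1.0 + z_spec)
print(point_estimate_metrics(z_phot, z_spec))
```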
More Like this
  1. ABSTRACT

    Knowing the redshift of galaxies is one of the first requirements of many cosmological experiments, and as it is impossible to perform spectroscopy for every galaxy being observed, photometric redshift (photo-z) estimations are still of particular interest. Here, we investigate different deep learning methods for obtaining photo-z estimates directly from images, comparing these with ‘traditional’ machine learning algorithms which make use of magnitudes retrieved through photometry. As well as testing a convolutional neural network (CNN) and inception-module CNN, we introduce a novel mixed-input model that allows both images and magnitude data to be used in the same model as a way of further improving the estimated redshifts. We also perform benchmarking as a way of demonstrating the performance and scalability of the different algorithms. The data used in the study come entirely from the Sloan Digital Sky Survey (SDSS), from which 1 million galaxies were used, each having 5-filter (ugriz) images with complete photometry and a spectroscopic redshift which was taken as the ground truth. The mixed-input inception CNN achieved a mean squared error (MSE) of 0.009, a significant improvement (30 per cent) over the traditional random forest (RF), and the model performed even better at lower redshifts, achieving an MSE of 0.0007 (a 50 per cent improvement over the RF) in the range z < 0.3. This method could be hugely beneficial to upcoming surveys, such as Euclid and the Vera C. Rubin Observatory’s Legacy Survey of Space and Time (LSST), which will require vast numbers of photo-z estimates produced as quickly and accurately as possible.
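The mixed-input idea described above combines an image branch with a magnitude branch before a shared regression head. The sketch below is a hypothetical PyTorch realization of that general pattern, not the architecture from the paper: the layer sizes, the 32-pixel cutout size, and the dummy batch are all assumptions made for illustration.

```python
import torch
import torch.nn as nn

class MixedInputPhotoZ(nn.Module):
    """Hypothetical mixed-input photo-z model: a small CNN branch for
    5-band (ugriz) image cutouts plus an MLP branch for the 5 magnitudes,
    concatenated before a regression head."""
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(5, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (N, 32)
        )
        self.mlp = nn.Sequential(nn.Linear(5, 32), nn.ReLU())   # magnitudes
        self.head = nn.Sequential(nn.Linear(64, 64), nn.ReLU(),
                                  nn.Linear(64, 1))              # redshift

    def forward(self, images, magnitudes):
        feats = torch.cat([self.cnn(images), self.mlp(magnitudes)], dim=1)
        return self.head(feats).squeeze(-1)

# Dummy batch of 8 objects with 32x32 cutouts and 5 magnitudes each.
model = MixedInputPhotoZ()
z_pred = model(torch.randn(8, 5, 32, 32), torch.randn(8, 5))
loss = nn.MSELoss()(z_pred, torch.rand(8))   # MSE, the metric quoted above
```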

     
  2. ABSTRACT With the launch of eROSITA (extended Roentgen Survey with an Imaging Telescope Array) on 2019 July 13, we face the challenge of computing reliable photometric redshifts for 3 million active galactic nuclei (AGNs) over the entire sky, having available only patchy and inhomogeneous ancillary data. While we have a good understanding of the photo-z quality obtainable for AGN using the spectral energy distribution (SED)-fitting technique, we tested the capability of machine learning (ML), usually reliable in computing photo-z for QSOs in wide and shallow areas with rich spectroscopic samples. Using MLPQNA as an example of ML, we computed photo-z for the X-ray-selected sources in Stripe 82X, using the publicly available photometric and spectroscopic catalogues. Stripe 82X is at least as deep as eROSITA will be and wide enough to also include rare and bright AGNs. In addition, the availability of ancillary data mimics what can be available over the whole sky. We found that when optical, and near- and mid-infrared data are available, ML and SED fitting perform comparably well in terms of overall accuracy, realistic redshift probability density functions, and fraction of outliers, although they are not the same for the two methods. The results could improve further if the available photometry is accurate and includes morphological information. Assuming that we can gather sufficient spectroscopy to build a representative training sample, with the current photometry coverage we can obtain reliable photo-z for a large fraction of sources in the Southern hemisphere well before the spectroscopic follow-up, thus enabling the eROSITA science return in a timely manner. The photo-z catalogue is released here.
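As an illustration of the kind of ML workflow described above (a multilayer perceptron trained on optical plus infrared magnitudes), the sketch below uses scikit-learn's MLPRegressor with the L-BFGS quasi-Newton solver as a rough stand-in for MLPQNA; the band layout and synthetic magnitudes are placeholders, not the Stripe 82X catalogues.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for optical + infrared magnitudes (7 bands) and spec-z;
# purely for illustrating the workflow, not real survey data.
rng = np.random.default_rng(42)
mags = rng.normal(21.0, 1.5, size=(5000, 7))
z_spec = np.clip(0.15 * np.abs(mags[:, 3] - 18.0) + rng.normal(0.0, 0.1, 5000),
                 0.0, None)

X_train, X_test, y_train, y_test = train_test_split(mags, z_spec, random_state=0)

# L-BFGS is a quasi-Newton solver, used here as a rough analogue of MLPQNA.
model = MLPRegressor(hidden_layer_sizes=(64, 32), solver="lbfgs",
                     max_iter=2000, random_state=0)
model.fit(X_train, y_train)
z_phot = model.predict(X_test)

dz = (z_phot - y_test) / (1.0 + y_test)
print("outlier fraction (|dz| > 0.15):", np.mean(np.abs(dz) > 0.15))
```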
  3. ABSTRACT

    Three-dimensional wide-field galaxy surveys are fundamental for cosmological studies. At higher redshifts (z ≳ 1.0), where galaxies are too faint, quasars still trace the large-scale structure of the Universe. Since available telescope time limits spectroscopic surveys, photometric methods are efficient for estimating redshifts for many quasars. Recently, machine-learning methods have become increasingly successful for quasar photometric redshifts; however, they hinge on the distribution of the training set. Therefore, a rigorous estimation of reliability is critical. We extracted optical and infrared photometric data from the cross-matched catalogue of the WISE All-Sky and PS1 3π DR2 sky surveys. We trained an XGBoost regressor and an artificial neural network on the relation between colour indices and spectroscopic redshift. We approximated the effective training set coverage with the K-nearest neighbours algorithm. We estimated reliable photometric redshifts for 2 562 878 quasars that overlap with the training set in feature space. We validated the derived redshifts with an independent, clustering-based redshift estimation technique. The final catalogue is publicly available.
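A minimal sketch of the two-step approach described above: a gradient-boosted regressor maps colour indices to redshift, and a K-nearest-neighbours distance in colour space flags target objects outside the effective training coverage. The hyperparameters, synthetic colours, and 95th-percentile coverage cut are illustrative assumptions, not the published pipeline.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from xgboost import XGBRegressor

# Synthetic colour indices and redshifts stand in for the real catalogue.
rng = np.random.default_rng(1)
colours_train = rng.normal(0.0, 1.0, size=(2000, 6))
z_train = 1.5 + 0.3 * colours_train[:, 0] + rng.normal(0.0, 0.1, 2000)
colours_target = rng.normal(0.0, 1.2, size=(500, 6))

# Step 1: gradient-boosted regression from colour indices to redshift.
reg = XGBRegressor(n_estimators=200, max_depth=4, learning_rate=0.1)
reg.fit(colours_train, z_train)
z_photo = reg.predict(colours_target)

# Step 2: approximate training-set coverage with K nearest neighbours;
# the 95th-percentile threshold is an illustrative choice.
knn = NearestNeighbors(n_neighbors=6).fit(colours_train)
train_dist, _ = knn.kneighbors(colours_train)       # column 0 is the point itself
threshold = np.quantile(train_dist[:, 1:].mean(axis=1), 0.95)
target_dist, _ = knn.kneighbors(colours_target, n_neighbors=5)
covered = target_dist.mean(axis=1) <= threshold
print(f"{covered.sum()} of {len(z_photo)} targets lie inside the training coverage")
```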

     
  4. Abstract We present the Young Supernova Experiment Data Release 1 (YSE DR1), comprising processed multicolor Pan-STARRS1 griz and Zwicky Transient Facility (ZTF) gr photometry of 1975 transients with host-galaxy associations, redshifts, spectroscopic and/or photometric classifications, and additional data products from 2019 November 24 to 2021 December 20. YSE DR1 spans discoveries and observations from young and fast-rising supernovae (SNe) to transients that persist for over a year, with a redshift distribution reaching z ≈ 0.5. We present relative SN rates from YSE’s magnitude- and volume-limited surveys, which are consistent with previously published values within estimated uncertainties for untargeted surveys. We combine YSE and ZTF data, and create multisurvey SN simulations to train the ParSNIP and SuperRAENN photometric classification algorithms; when validating our ParSNIP classifier on 472 spectroscopically classified YSE DR1 SNe, we achieve 82% accuracy across three SN classes (SNe Ia, II, Ib/Ic) and 90% accuracy across two SN classes (SNe Ia, core-collapse SNe). Our classifier performs particularly well on SNe Ia, with high (>90%) individual completeness and purity, which will help build an anchor photometric SNe Ia sample for cosmology. We then use our photometric classifier to characterize our photometric sample of 1483 SNe, labeling 1048 (∼71%) SNe Ia, 339 (∼23%) SNe II, and 96 (∼6%) SNe Ib/Ic. YSE DR1 provides a training ground for building discovery, anomaly detection, and classification algorithms, performing cosmological analyses, understanding the nature of red and rare transients, exploring tidal disruption events and nuclear variability, and preparing for the forthcoming Vera C. Rubin Observatory Legacy Survey of Space and Time.
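The completeness and purity figures quoted above follow the standard per-class definitions (recall and precision). The short sketch below computes them from label arrays using toy data; it is not the ParSNIP or SuperRAENN evaluation code.

```python
import numpy as np

def completeness_and_purity(true_labels, pred_labels, classes):
    """Per-class completeness (recall) and purity (precision) from
    true/predicted label arrays, using the standard definitions."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    stats = {}
    for cls in classes:
        true_positives = np.sum((pred_labels == cls) & (true_labels == cls))
        stats[cls] = {
            "completeness": true_positives / max(np.sum(true_labels == cls), 1),
            "purity": true_positives / max(np.sum(pred_labels == cls), 1),
        }
    return stats

# Toy labels only; not YSE DR1 classifications.
true = np.array(["Ia", "II", "Ia", "Ib/Ic", "II", "Ia"])
pred = np.array(["Ia", "II", "Ia", "II", "II", "Ia"])
print("accuracy:", np.mean(true == pred))
print(completeness_and_purity(true, pred, ["Ia", "II", "Ib/Ic"]))
```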
  5. Abstract

    Upcoming photometric surveys will discover tens of thousands of Type Ia supernovae (SNe Ia), vastly outpacing the capacity of our spectroscopic resources. In order to maximize the scientific return of these observations in the absence of spectroscopic information, we must accurately extract key parameters, such as SN redshifts, with photometric information alone. We present Photo-zSNthesis, a convolutional neural network-based method for predicting full redshift probability distributions from multi-band supernova lightcurves, tested on both simulated Sloan Digital Sky Survey (SDSS) and Vera C. Rubin Legacy Survey of Space and Time data as well as observed SDSS SNe. We show major improvements over predictions from existing methods on both simulations and real observations as well as minimal redshift-dependent bias, which is a challenge due to selection effects, e.g., Malmquist bias. Specifically, we show a 61× improvement in prediction bias 〈Δz〉 on PLAsTiCC simulations and 5× improvement on real SDSS data compared to results from a widely used photometric redshift estimator, LCFIT+Z. The PDFs produced by this method are well constrained and will maximize the cosmological constraining power of photometric SNe Ia samples.
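Photo-zSNthesis outputs full redshift PDFs; one common way to realize this (assumed here for illustration only, not taken from the published code) is a network score over a grid of redshift bins that is softmax-normalized into a PDF, from which a point estimate and the bias ⟨Δz⟩ quoted above can be derived. The sketch below uses random scores as a stand-in for the network output.

```python
import numpy as np

rng = np.random.default_rng(2)
z_grid = np.linspace(0.0, 1.0, 101)        # redshift bin centres (assumed grid)
scores = rng.normal(size=(100, 101))       # stand-in for per-object network output

# Softmax-normalize the per-bin scores into redshift PDFs.
pdfs = np.exp(scores - scores.max(axis=1, keepdims=True))
pdfs /= pdfs.sum(axis=1, keepdims=True)

# Point estimate from the PDF mean, and the prediction bias <dz>.
z_point = (pdfs * z_grid).sum(axis=1)
z_true = rng.uniform(0.05, 0.6, size=100)  # dummy "true" redshifts
bias = np.mean((z_point - z_true) / (1.0 + z_true))
print(f"<dz> = {bias:+.4f}")
```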

     