
This content will become publicly available on December 3, 2022

Title: SDSS-IV DR17: final release of MaNGA PyMorph photometric and deep-learning morphological catalogues
ABSTRACT We present the MaNGA PyMorph photometric Value Added Catalogue (MPP-VAC-DR17) and the MaNGA Deep Learning Morphological VAC (MDLM-VAC-DR17) for the final data release of the MaNGA survey, which is part of the SDSS Data Release 17 (DR17). The MPP-VAC-DR17 provides photometric parameters from Sérsic and Sérsic+Exponential fits to the two-dimensional surface brightness profiles of the MaNGA DR17 galaxy sample in the g, r, and i bands (total fluxes, half-light radii, bulge-disc fractions, ellipticities, position angles, etc.). The MDLM-VAC-DR17 provides deep-learning-based morphological classifications for the same galaxies. It includes several morphological properties: a T-Type, a finer separation between ellipticals and S0s, and the identification of edge-on and barred galaxies. While the MPP-VAC-DR17 simply extends the MaNGA PyMorph photometric VAC published in the SDSS Data Release 15 (MPP-VAC-DR15) to include the galaxies added to complete DR17, the MDLM-VAC-DR17 implements some changes and improvements compared to the previous release (MDLM-VAC-DR15): namely, the low end of the T-Types is better recovered in this new version. The catalogue also includes a separation between early and late types, which classifies the two populations in a way complementary to the T-Type, especially at the intermediate types (−1 < T-Type < 2), where the T-Type values show a large scatter. In addition, k-fold-based uncertainties on the classifications are provided. To ensure robustness and reliability, we have also visually inspected all the images. We describe the content of the catalogues and show some interesting ways in which they can be combined.
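The photometric quantities in the MPP-VAC-DR17 follow from the standard Sérsic profile. As a hedged illustration (these are textbook definitions, not the catalogue's fitting code, and every parameter value below is made up), the profile, its analytic total flux, and the bulge-to-total ratio of a Sérsic+Exponential decomposition can be sketched as:

```python
import math

def b_n(n):
    # Asymptotic approximation to the Sersic b_n (Ciotti & Bertin 1999),
    # accurate for the n range typical of galaxy fits
    return 2.0 * n - 1.0 / 3.0 + 4.0 / (405.0 * n) + 46.0 / (25515.0 * n ** 2)

def sersic_I(R, Ie, Re, n):
    """Sersic surface-brightness profile: I(R) = Ie * exp(-b_n * ((R/Re)^(1/n) - 1)),
    where Ie is the intensity at the half-light radius Re."""
    return Ie * math.exp(-b_n(n) * ((R / Re) ** (1.0 / n) - 1.0))

def sersic_total_flux(Ie, Re, n):
    """Analytic total flux of a Sersic profile:
    F = 2*pi*n * Ie * Re^2 * e^{b_n} * b_n^{-2n} * Gamma(2n)."""
    bn = b_n(n)
    return 2.0 * math.pi * n * Ie * Re ** 2 * math.exp(bn) * bn ** (-2.0 * n) * math.gamma(2.0 * n)

def bulge_to_total(Ie_b, Re_b, n_b, I0_d, h_d):
    """B/T for a Sersic bulge plus exponential disc I(R) = I0 * exp(-R/h),
    whose total flux is 2*pi*I0*h^2."""
    F_bulge = sersic_total_flux(Ie_b, Re_b, n_b)
    F_disc = 2.0 * math.pi * I0_d * h_d ** 2
    return F_bulge / (F_bulge + F_disc)
```

Note that for n = 1 the Sérsic profile reduces to an exponential disc (with I0 = Ie·e^{b_1} and h = Re/b_1), which is the form the Sérsic+Exponential model assumes for the disc component.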
Journal Name: Monthly Notices of the Royal Astronomical Society
Page Range or eLocation-ID: 4024 to 4036
Sponsoring Org: National Science Foundation
More Like this

    We present a detailed visual morphological classification for the 4614 MaNGA galaxies in SDSS Data Release 15, using image mosaics generated from a combination of r-band images (SDSS and deeper DESI Legacy Surveys) and their digital post-processing. We distinguish 13 Hubble types and identify the presence of bars and bright tidal debris. After correcting the MaNGA sample for volume completeness, we calculate the morphological fractions, the bivariate distribution of type and stellar mass M* – where we recognize a morphological transition ‘valley’ around S0a–Sa types – and the variations of the g − i colour and luminosity-weighted age over this distribution. We identified bars in 46.8 per cent of galaxies, present in all Hubble types later than S0. This fraction is a factor of ∼2 larger than found in other works for samples in common. We detected 14 per cent of galaxies with tidal features, with the fraction changing with M* and morphology. For 355 galaxies, the classification was uncertain; they are visually faint, mostly of low/intermediate masses, low concentrations, and discy in nature. Our morphological classification agrees well with other works for samples in common, though some particular differences emerge, showing that our image procedures allow us to identify a wealth of added-value information compared to previous SDSS-based estimates. Based on our classification, we also propose alternative criteria for the E–S0 separation, in the semiminor-to-semimajor axis ratio versus bulge-to-total light ratio (b/a − B/T) and concentration versus axis ratio (C − b/a) spaces.

  2. Abstract This paper documents the seventeenth data release (DR17) from the Sloan Digital Sky Surveys; the fifth and final release from the fourth phase (SDSS-IV). DR17 contains the complete release of the Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) survey, which reached its goal of surveying over 10,000 nearby galaxies. The complete release of the MaNGA Stellar Library accompanies this data, providing observations of almost 30,000 stars through the MaNGA instrument during bright time. DR17 also contains the complete release of the Apache Point Observatory Galactic Evolution Experiment 2 survey that publicly releases infrared spectra of over 650,000 stars. The main sample from the Extended Baryon Oscillation Spectroscopic Survey (eBOSS), as well as the subsurvey Time Domain Spectroscopic Survey data, were fully released in DR16. New single-fiber optical spectroscopy released in DR17 is from the SPectroscopic IDentification of ERosita Survey subsurvey and the eBOSS-RM program. Along with the primary data sets, DR17 includes 25 new or updated value-added catalogs. This paper concludes the release of SDSS-IV survey data. SDSS continues into its fifth phase with observations already underway for the Milky Way Mapper, Local Volume Mapper, and Black Hole Mapper surveys.
  3. Abstract We present morphological classifications of ∼27 million galaxies from the Dark Energy Survey (DES) Data Release 1 (DR1) using a supervised deep learning algorithm. The classification scheme separates: (a) early-type galaxies (ETGs) from late-types (LTGs), and (b) face-on galaxies from edge-on. Our Convolutional Neural Networks (CNNs) are trained on a small subset of DES objects with previously known classifications. These typically have mr ≲ 17.7 mag; we model fainter objects to mr < 21.5 mag by simulating what the brighter objects with well-determined classifications would look like if they were at higher redshifts. The CNNs reach 97% accuracy to mr < 21.5 on their training sets, suggesting that they are able to recover features more accurately than the human eye. We then used the trained CNNs to classify the vast majority of the other DES images. The final catalog comprises five independent CNN predictions for each classification scheme, helping to determine if the CNN predictions are robust or not. We obtain secure classifications for ∼87% and 73% of the catalog for the ETG vs. LTG and edge-on vs. face-on models, respectively. Combining the two classifications (a) and (b) helps to increase the purity of the ETG sample and to identify edge-on lenticular galaxies (as ETGs with high ellipticity). Where a comparison is possible, our classifications correlate very well with Sérsic index (n), ellipticity (ε) and spectral type, even for the fainter galaxies. This is the largest multi-band catalog of automated galaxy morphologies to date.
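The five independent CNN predictions per galaxy lend themselves to a simple robustness check: average the probabilities and flag classifications whose ensemble scatter is small. A minimal sketch follows; the thresholds, the label convention (p = probability of late type), and the function name are illustrative assumptions, not the DES pipeline's actual criteria:

```python
from statistics import mean, stdev

def combine_cnn_votes(probs, secure_margin=0.3, max_scatter=0.1):
    """Combine independent CNN probabilities for one galaxy (e.g. five, as in
    the DES catalogue) into a single label plus a robustness flag.
    Assumed convention: p is the probability of being a late type (LTG).
    `secure_margin` and `max_scatter` are illustrative thresholds."""
    p = mean(probs)          # ensemble probability
    s = stdev(probs)         # disagreement between the CNNs
    label = "ETG" if p < 0.5 else "LTG"
    # "secure" when the ensemble is both decisive and internally consistent
    secure = abs(p - 0.5) > secure_margin and s < max_scatter
    return label, p, s, secure
```

Averaging independently trained networks is a standard ensembling trick: the mean probability is usually better calibrated than any single network, and the scatter gives a cheap per-object uncertainty.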

    Studies of cosmology, galaxy evolution, and astronomical transients with current and next-generation wide-field imaging surveys like the Rubin Observatory Legacy Survey of Space and Time are all critically dependent on estimates of photometric redshifts. Capsule networks are a new type of neural network architecture that is better suited for identifying morphological features of the input images than traditional convolutional neural networks. We use a deep capsule network trained on ugriz images, spectroscopic redshifts, and Galaxy Zoo spiral/elliptical classifications of ∼400 000 Sloan Digital Sky Survey galaxies to do photometric redshift estimation. We achieve a photometric redshift prediction accuracy and a fraction of catastrophic outliers that are comparable to or better than current methods for SDSS main galaxy sample-like data sets (r ≤ 17.8 and zspec ≤ 0.4) while requiring less data and fewer trainable parameters. Furthermore, the decision-making of our capsule network is much more easily interpretable as capsules act as a low-dimensional encoding of the image. When the capsules are projected on a two-dimensional manifold, they form a single redshift sequence with the fraction of spirals in a region exhibiting a gradient roughly perpendicular to the redshift sequence. We perturb encodings of real galaxy images in this low-dimensional space to create synthetic galaxy images that demonstrate the image properties (e.g. size, orientation, and surface brightness) encoded by each dimension. We also measure correlations between galaxy properties (e.g. magnitudes, colours, and stellar mass) and each capsule dimension. We publicly release our code, estimated redshifts, and additional catalogues at


    Galaxy morphology is a fundamental quantity, which is essential not only for the full spectrum of galaxy-evolution studies, but also for a plethora of science in observational cosmology (e.g. as a prior for photometric-redshift measurements and as contextual data for transient light-curve classifications). While a rich literature exists on morphological-classification techniques, the unprecedented data volumes, coupled, in some cases, with the short cadences of forthcoming ‘Big-Data’ surveys (e.g. from the LSST), present novel challenges for this field. Large data volumes make such data sets intractable for visual inspection (even via massively distributed platforms like Galaxy Zoo), while short cadences make it difficult to employ techniques like supervised machine learning, since it may be impractical to repeatedly produce training sets on short time-scales. Unsupervised machine learning, which does not require training sets, is ideally suited to the morphological analysis of new and forthcoming surveys. Here, we employ an algorithm that performs clustering of graph representations, in order to group image patches with similar visual properties and objects constructed from those patches, like galaxies. We implement the algorithm on the Hyper-Suprime-Cam Subaru-Strategic-Program Ultra-Deep survey, to autonomously reduce the galaxy population to a small number (160) of ‘morphological clusters’, populated by galaxies with similar morphologies, which are then benchmarked using visual inspection. The morphological classifications (which we release publicly) exhibit a high level of purity, and reproduce known trends in key galaxy properties as a function of morphological type at z < 1 (e.g. stellar-mass functions, rest-frame colours, and the position of galaxies on the star-formation main sequence).
Our study demonstrates the power of unsupervised machine learning in performing accurate morphological analysis, which will become indispensable in this new era of deep-wide surveys.
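The core idea of grouping objects by visual similarity without labels can be illustrated with a toy example. The sketch below uses plain k-means on patch feature vectors purely as a stand-in: the paper's method clusters graph representations of image patches, which is a different and more involved algorithm, and all names and values here are made up:

```python
import random
from math import dist

def kmeans(features, k, iters=20, seed=0):
    """Minimal k-means over feature vectors (tuples of floats).
    A stand-in illustration of unsupervised grouping; NOT the
    graph-representation clustering used in the paper."""
    rng = random.Random(seed)
    centroids = rng.sample(features, k)
    for _ in range(iters):
        # assign each feature vector to its nearest centroid
        clusters = [[] for _ in range(k)]
        for f in features:
            nearest = min(range(k), key=lambda i: dist(f, centroids[i]))
            clusters[nearest].append(f)
        # move each centroid to the mean of its members (keep it if empty)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*members)) if members else centroids[i]
            for i, members in enumerate(clusters)
        ]
    return centroids, clusters
```

In the survey setting, each "morphological cluster" would then be benchmarked by visually inspecting a handful of its members, which is far cheaper than inspecting the full galaxy population.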
