
Title: Pushing automated morphological classifications to their limits with the Dark Energy Survey
Abstract: We present morphological classifications of ∼27 million galaxies from the Dark Energy Survey (DES) Data Release 1 (DR1) using a supervised deep learning algorithm. The classification scheme separates: (a) early-type galaxies (ETGs) from late-types (LTGs), and (b) face-on from edge-on galaxies. Our Convolutional Neural Networks (CNNs) are trained on a small subset of DES objects with previously known classifications. These typically have mr ≲ 17.7 mag; we model fainter objects to mr < 21.5 mag by simulating what the brighter objects with well-determined classifications would look like if they were at higher redshifts. The CNNs reach 97% accuracy to mr < 21.5 mag on their training sets, suggesting that they are able to recover features more accurately than the human eye. We then use the trained CNNs to classify the vast majority of the other DES images. The final catalog comprises five independent CNN predictions for each classification scheme, helping to determine whether the CNN predictions are robust. We obtain secure classifications for ∼87% and 73% of the catalog for the ETG vs. LTG and edge-on vs. face-on models, respectively. Combining the two classifications (a) and (b) helps to increase the purity of the ETG sample and to identify edge-on lenticular galaxies (as ETGs with high ellipticity). Where a comparison is possible, our classifications correlate very well with Sérsic index (n), ellipticity (ε), and spectral type, even for the fainter galaxies. This is the largest multi-band catalog of automated galaxy morphologies to date.
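The "secure classification" criterion above rests on agreement among the five independent CNN predictions per galaxy. A minimal sketch of one such ensemble-agreement rule is shown below; the function name, the 0.5 threshold, and the unanimity requirement are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def secure_classification(probs, threshold=0.5):
    """Combine five independent CNN probabilities for one galaxy.

    probs : array-like of shape (5,), P(LTG) from each trained CNN.
    Returns (label, is_secure): label is 1 for LTG, 0 for ETG; the
    result is flagged secure only if all five networks vote the same way.
    """
    probs = np.asarray(probs, dtype=float)
    votes = (probs > threshold).astype(int)   # each CNN's binary vote
    label = int(np.median(probs) > threshold) # ensemble label via median
    is_secure = votes.min() == votes.max()    # unanimous ensemble?
    return label, is_secure

# Four networks strongly agree; one is borderline-opposed, so the
# galaxy is labelled LTG but flagged as insecure.
print(secure_classification([0.9, 0.85, 0.92, 0.88, 0.45]))  # (1, False)
```

In this sketch, galaxies whose ensemble disagrees would be excluded from the "secure" ∼87%/73% subsets rather than forced into a class.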
Authors:
Award ID(s):
1816330
Publication Date:
NSF-PAR ID:
10234818
Journal Name:
Monthly Notices of the Royal Astronomical Society
ISSN:
0035-8711
Sponsoring Org:
National Science Foundation
More Like this
  1. Osteoarthritis (OA) is the most common form of arthritis and can often occur in the knee. While convolutional neural networks (CNNs) have been widely used to study medical images, the application of a 3-dimensional (3D) CNN in knee OA diagnosis is limited. This study utilizes a 3D CNN model to analyze sequences of knee magnetic resonance (MR) images to perform knee OA classification. An advantage of using 3D CNNs is the ability to analyze the whole sequence of 3D MR images as a single unit, as opposed to a traditional 2D CNN, which examines one image at a time. Therefore, 3D features could be extracted from adjacent slices, which may not be detectable from a single 2D image. The input data for each knee were a sequence of double-echo steady-state (DESS) MR images, and each knee was labeled by the Kellgren and Lawrence (KL) grade of severity at levels 0–4. In addition to the 5-category KL grade classification, we further examined a 2-category classification that distinguishes non-OA (KL ≤ 1) from OA (KL ≥ 2) knees. Clinically, diagnosing a patient with knee OA is the ultimate goal of assigning a KL grade. On a dataset with 1100 knees, the 3D CNN model that classifies knees with and without OA achieved an accuracy of 86.5% on the validation set and 83.0% on the testing set. We further conducted a comparative study between MRI and X-ray. Compared with a CNN model using X-ray images trained from the same group of patients, the proposed 3D model with MR images achieved higher accuracy in both the 5-category classification (54.0% vs. 50.0%) and the 2-category classification (83.0% vs. 77.0%). The result indicates that MRI, with the application of a 3D CNN model, has greater potential to improve diagnosis accuracy for knee OA clinically than the currently used X-ray methods.
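The 2-category scheme above collapses the five KL severity grades into a binary non-OA/OA label. A minimal sketch of that mapping and the accuracy metric it is scored with follows; the function names are assumptions for illustration only.

```python
def kl_to_binary(kl_grade):
    """Map a Kellgren-Lawrence grade (0-4) to the binary label used
    in the 2-category task: 0 = non-OA (KL <= 1), 1 = OA (KL >= 2)."""
    if not 0 <= kl_grade <= 4:
        raise ValueError("KL grade must be in 0-4")
    return int(kl_grade >= 2)

def accuracy(y_true, y_pred):
    """Fraction of matching labels, the metric behind figures
    such as 86.5% (validation) and 83.0% (testing)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# All five grades mapped to the binary scheme.
print([kl_to_binary(g) for g in range(5)])  # [0, 0, 1, 1, 1]
```

Note that the binary split sits between KL 1 and KL 2, which is why a model can score much higher on the 2-category task (83.0%) than on the full 5-category task (54.0%): errors between neighbouring grades on the same side of the split are forgiven.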
  2. ABSTRACT We present the MaNGA PyMorph photometric Value Added Catalogue (MPP-VAC-DR17) and the MaNGA Deep Learning Morphological VAC (MDLM-VAC-DR17) for the final data release of the MaNGA survey, which is part of the SDSS Data Release 17 (DR17). The MPP-VAC-DR17 provides photometric parameters from Sérsic and Sérsic+Exponential fits to the two-dimensional surface brightness profiles of the MaNGA DR17 galaxy sample in the g, r, and i bands (e.g. total fluxes, half-light radii, bulge-disc fractions, ellipticities, position angles, etc.). The MDLM-VAC-DR17 provides deep-learning-based morphological classifications for the same galaxies. The MDLM-VAC-DR17 includes a number of morphological properties, for example, a T-Type, a finer separation between elliptical and S0, as well as the identification of edge-on and barred galaxies. While the MPP-VAC-DR17 simply extends the MaNGA PyMorph photometric VAC published in the SDSS Data Release 15 (MPP-VAC-DR15) to now include galaxies that were added to make the final DR17, the MDLM-VAC-DR17 implements some changes and improvements compared to the previous release (MDLM-VAC-DR15): namely, the low end of the T-Types is better recovered in this new version. The catalogue also includes a separation between early and late types, which classifies the two populations in a complementary way to the T-Type, especially at the intermediate types (−1 < T-Type < 2), where the T-Type values show a large scatter. In addition, k-fold-based uncertainties on the classifications are also provided. To ensure robustness and reliability, we have also visually inspected all the images. We describe the content of the catalogues and show some interesting ways in which they can be combined.
  3. In the past decade, deep neural networks, and specifically convolutional neural networks (CNNs), have become a primary tool in the field of biomedical image analysis, and are used intensively in other fields such as object or face recognition. CNNs have a clear advantage in their ability to provide superior performance, yet without the requirement to fully understand the image elements that reflect the biomedical problem at hand, and without designing specific algorithms for that task. The availability of easy-to-use libraries and their non-parametric nature make CNNs the most common solution to problems that require automatic biomedical image analysis. But while CNNs have many advantages, they also have certain downsides. The features determined by CNNs are complex and unintuitive, and therefore CNNs often work as a "Black Box". Additionally, CNNs learn from any piece of information in the pixel data that can provide a discriminative signal, making it more difficult to control what the CNN actually learns. Here we follow common practices to test whether CNNs can classify biomedical image datasets, but instead of using the entire image we use merely parts of the images that do not have biomedical content. The experiments show that CNNs can provide high classification accuracy even when they are trained with datasets that do not contain any biomedical information, or can be systematically biased by irrelevant information in the image data. The presence of such consistent irrelevant data is difficult to identify, and can therefore lead to biased experimental results. Possible solutions to this downside of CNNs can be control experiments, as well as other protective practices to validate the results and avoid biased conclusions based on CNN-generated annotations.
  4. ABSTRACT The 5-yr Dark Energy Survey Supernova Programme (DES-SN) is one of the largest and deepest transient surveys to date in terms of volume and number of supernovae. Identifying and characterizing the host galaxies of transients plays a key role in their classification, the study of their formation mechanisms, and the cosmological analyses. To derive accurate host galaxy properties, we create depth-optimized coadds using single-epoch DES-SN images that are selected based on sky and atmospheric conditions. For each of the five DES-SN seasons, a separate coadd is made from the other four seasons such that each SN has a corresponding deep coadd with no contaminating SN emission. The coadds reach limiting magnitudes of order ∼27 in g band, and have a much smaller magnitude uncertainty than the previous DES-SN host templates, particularly for faint objects. We present the resulting multiband photometry of host galaxies for samples of spectroscopically confirmed type Ia (SNe Ia), core-collapse (CCSNe), and superluminous (SLSNe) supernovae, as well as rapidly evolving transients (RETs), discovered by DES-SN. We derive host galaxy stellar masses and probabilistically compare stellar-mass distributions to samples from other surveys. We find that the DES spectroscopically confirmed sample of SNe Ia selects preferentially fewer high-mass hosts at high redshift compared to other surveys, while at low redshift the distributions are consistent. DES CCSNe and SLSNe hosts are similar to other samples, while RET hosts are unlike the hosts of any other transients, although these differences have not been disentangled from selection effects.
  5. Abstract We describe an updated calibration and diagnostic framework, Balrog, used to directly sample the selection and photometric biases of the Dark Energy Survey (DES) Year 3 (Y3) data set. We systematically inject onto the single-epoch images of a random 20% subset of the DES footprint an ensemble of nearly 30 million realistic galaxy models derived from DES Deep Field observations. These augmented images are analyzed in parallel with the original data to automatically inherit measurement systematics that are often too difficult to capture with generative models. The resulting object catalog is a Monte Carlo sampling of the DES transfer function and is used as a powerful diagnostic and calibration tool for a variety of DES Y3 science, particularly for the calibration of the photometric redshifts of distant "source" galaxies and magnification biases of nearer "lens" galaxies. The recovered Balrog injections are shown to closely match the photometric property distributions of the Y3 GOLD catalog, particularly in color, and capture the number density fluctuations from observing conditions of the real data within 1% for a typical galaxy sample. We find that Y3 colors are extremely well calibrated, typically within ∼1–8 mmag, but for a small subset of objects, we detect significant magnitude biases correlated with large overestimates of the injected object size due to proximity effects and blending. We discuss approaches to extend the current methodology to capture more aspects of the transfer function and reach full coverage of the survey footprint for future analyses.