Title: Fast and efficient identification of anomalous galaxy spectra with neural density estimation
ABSTRACT

Current large-scale astrophysical experiments produce unprecedented amounts of rich and diverse data, creating a growing need for fast and flexible automated data inspection methods. Deep learning algorithms can capture subtle variations in rich data sets and are fast to apply once trained. Here, we study the applicability of an unsupervised and probabilistic deep learning framework, the probabilistic auto-encoder, to the detection of peculiar objects in galaxy spectra from the SDSS survey. Unlike supervised algorithms, this algorithm is not trained to detect a specific feature or type of anomaly; instead, it learns the complex and diverse distribution of galaxy spectra from the training data and identifies outliers with respect to the learned distribution. We find that the algorithm consistently assigns lower probabilities (higher anomaly scores) to spectra that exhibit unusual features. For example, the majority of outliers among quiescent galaxies are E+A galaxies, whose spectra combine features of old and young stellar populations. Other identified outliers include LINERs, supernovae, and overlapping objects. Conditional modelling further allows us to incorporate additional information: here we evaluate the probability of an object being anomalous given a certain spectral class, but other information, such as data-quality metrics or estimated redshift, could be incorporated as well. We make our code publicly available.
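As a rough illustration of the density-based anomaly scoring described above, the sketch below ranks mock spectra by their negative log-likelihood under a density model fitted to a learned low-dimensional representation. It is a minimal stand-in rather than the paper's implementation: PCA replaces the neural encoder, a Gaussian mixture replaces the normalizing flow of the probabilistic auto-encoder, and the array shapes are placeholders.

```python
# Minimal sketch of density-based anomaly scoring for galaxy spectra.
# NOTE: PCA stands in for the neural encoder and a Gaussian mixture for the
# normalizing flow of the probabilistic auto-encoder; this only illustrates
# the scoring logic, not the paper's architecture.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
spectra = rng.normal(size=(5000, 1000))      # placeholder for normalised rest-frame spectra

encoder = PCA(n_components=10).fit(spectra)  # stand-in for the encoder
latents = encoder.transform(spectra)

density = GaussianMixture(n_components=5, random_state=0).fit(latents)  # stand-in for the flow

# Anomaly score: negative log-likelihood of each spectrum under the learned
# latent density; spectra in low-probability regions score highest.
scores = -density.score_samples(latents)
most_anomalous = np.argsort(scores)[-50:]
```

A conditional variant would fit or evaluate the density separately for each spectral class, so that a spectrum is scored against objects of its own class rather than against the full population.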

 
PAR ID:
10468402
Author(s) / Creator(s):
Publisher / Repository:
Oxford University Press
Date Published:
Journal Name:
Monthly Notices of the Royal Astronomical Society
Volume:
526
Issue:
2
ISSN:
0035-8711
Format(s):
Medium: X
Size(s):
p. 3072-3087
Sponsoring Org:
National Science Foundation
More Like this
  1.
    ABSTRACT Rare extragalactic objects can carry substantial information about the past, present, and future universe. Given the size of astronomical databases in the information era, it can be assumed that a great many outlier galaxies are included in existing and future astronomical databases. However, a manual search for these objects is impractical due to the labour required, and therefore the ability to detect such objects largely depends on computer algorithms. This paper describes an unsupervised machine learning algorithm for the automatic detection of outlier galaxy images and its application to several Hubble Space Telescope fields. The algorithm does not require training, and therefore does not depend on the preparation of clean training sets. Applying the algorithm to a large collection of galaxies detected a variety of outlier galaxy images. The algorithm is not perfect, in the sense that not every object it detects is indeed an outlier, but it reduces the data set by two orders of magnitude, making practical manual identification possible. The resulting catalogue contains 147 objects that would be very difficult to identify without automation.
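    The abstract above does not spell out the algorithm, but the general principle of training-free outlier ranking can be sketched as follows: represent each galaxy image by a feature vector, rank galaxies by how isolated they are from their nearest neighbours in that feature space, and pass only the top-ranked few to a human. The feature extraction, neighbour count, and cut below are illustrative assumptions, not the paper's actual method.

    ```python
    # Hedged illustration of training-free outlier ranking for galaxy images.
    # Each image is assumed to be summarised by a feature vector; isolated
    # objects (large mean distance to their nearest neighbours) rank highest
    # and are kept for manual inspection.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(1)
    features = rng.normal(size=(20000, 64))    # placeholder per-galaxy image descriptors

    nn = NearestNeighbors(n_neighbors=11).fit(features)
    dist, _ = nn.kneighbors(features)          # first neighbour is the point itself
    outlier_score = dist[:, 1:].mean(axis=1)   # mean distance to the 10 nearest neighbours

    # Keeping roughly the top 1% reduces the set by two orders of magnitude.
    candidates = np.argsort(outlier_score)[-len(features) // 100:]
    ```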
  2. Abstract

    We present the Swimmy (Subaru WIde-field Machine-learning anoMalY) survey program, a deep-learning-based search for unique sources using multicolored (grizy) imaging data from the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP). This program aims to detect unexpected, novel, and rare populations and phenomena by utilizing the deep imaging data acquired over the wide-field coverage of the HSC-SSP. This article, the first paper in the Swimmy series, describes an anomaly detection technique for selecting unique populations as “outliers” from the data set. The model was tested with known extreme emission-line galaxies (XELGs) and quasars, which confirmed that the proposed method successfully selected ∼60%–70% of the quasars and 60% of the XELGs without labeled training data. With reference to the spectral information of local galaxies at z = 0.05–0.2 obtained from the Sloan Digital Sky Survey, we investigated the physical properties of the selected anomalies and compared them based on the significance of their outlier values. The results revealed that XELGs constitute notable fractions of the most anomalous galaxies, and that certain galaxies manifest unique morphological features. In summary, deep anomaly detection is an effective tool for searching for rare objects and, ultimately, unknown unknowns in large data sets. Further development of the proposed model and selection process can promote the practical applications required to achieve specific scientific goals.
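    Swimmy's own model is a deep network applied to HSC-SSP images; as a much simpler hedged sketch of the same selection idea, one can flag colour-space outliers directly from grizy catalogue photometry, as below. The isolation forest, the contamination fraction, and the mock magnitudes are assumptions made purely for illustration.

    ```python
    # Sketch of flagging outliers in grizy photometry without labelled training data.
    # This is NOT the Swimmy deep-learning model; an isolation forest on catalogue
    # magnitudes only illustrates selecting "unexpected" sources as outliers.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(2)
    grizy = rng.normal(loc=22.0, scale=1.0, size=(100000, 5))   # placeholder magnitudes

    forest = IsolationForest(contamination=0.001, random_state=0).fit(grizy)
    anomaly = -forest.score_samples(grizy)       # higher = more anomalous
    candidates = np.argsort(anomaly)[-100:]      # e.g. XELG/quasar candidates to vet
    ```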

     
  3. Abstract

    The inverse problem of extracting the stellar population content of galaxy spectra is analysed here from a basic standpoint based on information theory. By interpreting spectra as probability distribution functions, we find that galaxy spectra have high entropy, thus leading to a rather low effective information content. The highest variation in entropy is unsurprisingly found in regions that have been well studied for decades with the conventional approach. We target a set of six spectral regions that show the highest variation in entropy – the 4000 Å break being the most informative one. As a test case with real data, we measure the entropy of a set of high-quality spectra from the Sloan Digital Sky Survey, and contrast entropy-based results with the traditional method based on line strengths. The data are classified into star-forming (SF), quiescent (Q), and active galactic nucleus (AGN) galaxies, and show – independently of any physical model – that AGN spectra can be interpreted as a transition between SF and Q galaxies, with SF galaxies featuring a more diverse variation in entropy. The high level of entanglement complicates the determination of population parameters in a robust, unbiased way, and affects traditional methods that compare models with observations, as well as machine learning (especially deep learning) algorithms that rely on the statistical properties of the data to assess the variations among spectra. Entropy provides a new avenue to improve population synthesis models so that they give a more faithful representation of real galaxy spectra.
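    As a small worked example of the idea above, the snippet below interprets a synthetic spectrum around the 4000 Å break as a probability distribution and computes its Shannon entropy; the wavelength grid, toy flux, and noise level are placeholders, not SDSS data.

    ```python
    # Worked example: treat a spectrum as a probability distribution function
    # and measure its Shannon entropy (in nats). A nearly flat spectrum gives
    # entropy close to the maximum log(N), i.e. low effective information.
    import numpy as np

    wavelength = np.linspace(3800.0, 4200.0, 400)        # region around the 4000 A break
    flux = 1.0 + 0.5 * (wavelength > 4000.0)             # toy break-like spectrum
    flux = flux + np.random.default_rng(3).normal(0.0, 0.02, flux.size)

    p = np.clip(flux, 1e-12, None)
    p = p / p.sum()                                      # normalise to a PDF
    entropy = -np.sum(p * np.log(p))
    print(f"entropy = {entropy:.4f} nats (maximum = {np.log(p.size):.4f})")
    ```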

     
  4.
    Abstract We present morphological classifications of ∼27 million galaxies from the Dark Energy Survey (DES) Data Release 1 (DR1) using a supervised deep learning algorithm. The classification scheme separates (a) early-type galaxies (ETGs) from late-types (LTGs), and (b) face-on galaxies from edge-on ones. Our Convolutional Neural Networks (CNNs) are trained on a small subset of DES objects with previously known classifications. These typically have mr ≲ 17.7 mag; we model fainter objects to mr < 21.5 mag by simulating what the brighter objects with well-determined classifications would look like if they were at higher redshifts. The CNNs reach 97% accuracy to mr < 21.5 mag on their training sets, suggesting that they are able to recover features more accurately than the human eye. We then use the trained CNNs to classify the vast majority of the other DES images. The final catalog comprises five independent CNN predictions for each classification scheme, which helps to determine whether the CNN predictions are robust. We obtain secure classifications for ∼87% and 73% of the catalog for the ETG vs. LTG and edge-on vs. face-on models, respectively. Combining the two classifications (a) and (b) helps to increase the purity of the ETG sample and to identify edge-on lenticular galaxies (as ETGs with high ellipticity). Where a comparison is possible, our classifications correlate very well with Sérsic index (n), ellipticity (ε), and spectral type, even for the fainter galaxies. This is the largest multi-band catalog of automated galaxy morphologies to date.
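    For orientation, the sketch below shows what a minimal binary-morphology CNN (an ETG vs. LTG probability from a small multi-band cutout) might look like in Keras; the cutout size, layer widths, and training call are illustrative assumptions and do not reproduce the DES pipeline's architecture or its redshifting-based augmentation.

    ```python
    # Minimal sketch of a binary morphology CNN (ETG vs. LTG); all sizes are
    # assumptions for illustration, not the DES classifier's architecture.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(64, 64, 3)),                 # assumed g, r, i cutout
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),    # P(early-type)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    # model.fit(bright_labelled_cutouts, labels, ...)      # train on the bright subset,
    #                                                      # then apply to fainter images
    ```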
  5. With the advent of big data and the popularity of black-box deep learning methods, it is imperative to address the robustness of neural networks to noise and outliers. We propose the use of Winsorization to recover model performance when the data may contain outliers and other aberrant observations. We provide a comparative analysis of several probabilistic artificial intelligence and machine learning techniques for supervised learning case studies. Broadly, Winsorization is a versatile technique for accounting for outliers in data. However, different probabilistic machine learning techniques have different levels of efficiency when used on outlier-prone data, with or without Winsorization. We find that Gaussian processes are extremely vulnerable to outliers, while deep learning techniques in general are more robust.
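    As a short, concrete example of Winsorization, the snippet below clips the lowest and highest 1% of a sample to the corresponding percentiles before any model fitting; the data, limits, and injected outliers are placeholders chosen only for illustration.

    ```python
    # Example of Winsorization: clip extreme values to chosen percentiles so that
    # outliers cannot dominate a subsequent fit. scipy provides an implementation.
    import numpy as np
    from scipy.stats.mstats import winsorize

    rng = np.random.default_rng(4)
    y = rng.normal(size=1000)
    y[:10] += 50.0                               # inject gross outliers

    y_w = winsorize(y, limits=(0.01, 0.01))      # clip lowest/highest 1% of values
    print(float(y.max()), float(np.max(y_w)))    # extremes are pulled back toward the bulk
    ```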

     