Title: Finding quadruply imaged quasars with machine learning – I. Methods
ABSTRACT Strongly lensed quadruply imaged quasars (quads) are extraordinary objects. They are very rare in the sky and yet they provide unique information about a wide range of topics, including the expansion history and the composition of the Universe, the distribution of stars and dark matter in galaxies, the host galaxies of quasars, and the stellar initial mass function. Finding them in astronomical images is a classic ‘needle in a haystack’ problem, as they are outnumbered by other (contaminant) sources by many orders of magnitude. To solve this problem, we develop state-of-the-art deep learning methods and train them on realistic simulated quads based on real images of galaxies taken from the Dark Energy Survey, with realistic source and deflector models, including the chromatic effects of microlensing. The performance of the best methods on a mixture of simulated and real objects is excellent, yielding area under the receiver operating characteristic curve in the range of 0.86–0.89. Recall is close to 100 per cent down to total magnitude i ∼ 21, indicating high completeness, while precision declines from 85 per cent to 70 per cent in the range i ∼ 17–21. The methods are extremely fast: training on 2 million samples takes 20 h on a GPU machine, and 10⁸ multiband cut-outs can be evaluated per GPU-hour. The speed and performance of the method pave the way to apply it to large samples of astronomical sources, bypassing the need for photometric pre-selection that is likely to be a major cause of incompleteness in current samples of known quads.
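As a rough illustration of the metrics quoted above (area under the ROC curve, recall, and precision), the sketch below evaluates a generic binary quad-versus-contaminant classifier with scikit-learn. The labels and scores are synthetic placeholders, not outputs of the paper's networks, and the 0.5 decision threshold is an arbitrary choice for the example.

```python
# Minimal sketch: scoring a binary quad classifier with ROC AUC, precision,
# and recall. All data here are synthetic stand-ins for a real validation set
# of multiband cut-outs.
import numpy as np
from sklearn.metrics import roc_auc_score, precision_score, recall_score

rng = np.random.default_rng(0)

# Placeholder validation set: 1 = quad, 0 = contaminant.
labels = rng.integers(0, 2, size=10_000)
scores = np.clip(labels * 0.6 + rng.normal(0.3, 0.25, size=labels.size), 0, 1)

auc = roc_auc_score(labels, scores)      # area under the ROC curve
preds = (scores > 0.5).astype(int)       # threshold the network output
print(f"AUC = {auc:.2f}, "
      f"precision = {precision_score(labels, preds):.2f}, "
      f"recall = {recall_score(labels, preds):.2f}")
```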
Award ID(s):
1906976
NSF-PAR ID:
10337831
Author(s) / Creator(s):
Date Published:
Journal Name:
Monthly Notices of the Royal Astronomical Society
Volume:
513
Issue:
2
ISSN:
0035-8711
Page Range / eLocation ID:
2407 to 2421
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. ABSTRACT We report the results of the STRong lensing Insights into the Dark Energy Survey (STRIDES) follow-up campaign of the late 2017/early 2018 season. We obtained spectra of 65 lensed quasar candidates with ESO Faint Object Spectrograph and Camera 2 on the NTT and Echellette Spectrograph and Imager on Keck, confirming 10 new lensed quasars and 10 quasar pairs. Eight lensed quasars are doubly imaged with source redshifts between 0.99 and 2.90, one is triply imaged (DESJ0345−2545, z = 1.68), and one is quadruply imaged (quad: DESJ0053−2012, z = 3.8). Singular isothermal ellipsoid models for the doubles, based on high-resolution imaging from SAMI on the Southern Astrophysical Research Telescope or Near InfraRed Camera 2 on Keck, give total magnifications between 3.2 and 5.6, and Einstein radii between 0.49 and 1.97 arcsec. After spectroscopic follow-up, we extract multi-epoch grizY photometry of confirmed lensed quasars and contaminant quasar + star pairs from DES data using parametric multiband modelling, and compare variability in each system’s components. By measuring the reduced χ² associated with fitting all epochs to the same magnitude, we find a simple cut on the less variable component that retains all confirmed lensed quasars, while removing 94 per cent of contaminant systems. Based on our spectroscopic follow-up, this variability information improves selection of lensed quasars and quasar pairs from 34–45 per cent to 51–70 per cent, with most remaining contaminants being star-forming galaxies. Using mock lensed quasar light curves we demonstrate that selection based only on variability will over-represent the quad fraction by 10 per cent over a complete DES magnitude-limited sample, explained by the magnification bias and hence lower luminosity/more variable sources in quads.
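The variability statistic described above is the reduced χ² of fitting all photometric epochs of one image to a constant magnitude. The sketch below shows one way to compute it; the magnitudes, uncertainties, and cut value are illustrative, not taken from the paper.

```python
# Minimal sketch of a constant-magnitude reduced chi-squared and a cut on the
# less variable component of a candidate system. All numbers are placeholders.
import numpy as np

def reduced_chi2_constant(mags, mag_errs):
    """Reduced chi^2 of the best-fitting constant magnitude."""
    mags = np.asarray(mags, dtype=float)
    errs = np.asarray(mag_errs, dtype=float)
    # The inverse-variance weighted mean is the best-fitting constant model.
    weights = 1.0 / errs**2
    mean_mag = np.sum(weights * mags) / np.sum(weights)
    chi2 = np.sum(((mags - mean_mag) / errs) ** 2)
    return chi2 / (mags.size - 1)

# Example: two components of a candidate system over five epochs.
comp_a = reduced_chi2_constant([18.1, 18.3, 18.0, 18.4, 18.2], [0.05] * 5)
comp_b = reduced_chi2_constant([19.00, 19.02, 18.99, 19.01, 19.00], [0.05] * 5)

# Keep the system only if even its *less* variable component varies
# significantly (threshold here is illustrative).
less_variable = min(comp_a, comp_b)
is_candidate = less_variable > 1.5
print(f"chi2_red(A)={comp_a:.2f}, chi2_red(B)={comp_b:.2f}, keep={is_candidate}")
```

In this toy example the second component is consistent with a constant magnitude, so the system would be rejected as a likely quasar + star pair.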
  2. ABSTRACT

    We apply a new deep learning technique to detect, classify, and deblend sources in multiband astronomical images. We train and evaluate the performance of an artificial neural network built on the Mask Region-based Convolutional Neural Network image processing framework, a general code for efficient object detection, classification, and instance segmentation. After evaluating the performance of our network against simulated ground truth images for star and galaxy classes, we find a precision of 92 per cent at 80 per cent recall for stars and a precision of 98 per cent at 80 per cent recall for galaxies in a typical field with ∼30 galaxies arcmin⁻². We investigate the deblending capability of our code, and find that clean deblends are handled robustly during object masking, even for significantly blended sources. This technique, or extensions using similar network architectures, may be applied to current and future deep imaging surveys such as Large Synoptic Survey Telescope and Wide-Field Infrared Survey Telescope. Our code, astro r-cnn, is publicly available at https://github.com/burke86/astro_rcnn.
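The mask-based deblending mentioned above amounts to splitting a blended cut-out into one image per detected object using the per-object segmentation masks. The sketch below shows the idea with toy masks; it does not use astro_rcnn and the array shapes are arbitrary.

```python
# Minimal sketch of mask-based deblending: given per-object boolean masks from
# an instance-segmentation network (Mask R-CNN style), produce one masked copy
# of the blended cut-out per source. Masks and image are toy placeholders.
import numpy as np

def deblend_with_masks(image, masks):
    """Return one masked copy of `image` per detected object."""
    return [np.where(mask, image, 0.0) for mask in masks]

rng = np.random.default_rng(2)
image = rng.random((64, 64))                 # toy blended cut-out

mask_a = np.zeros((64, 64), dtype=bool)      # first object's footprint
mask_a[:, :40] = True
mask_b = np.zeros((64, 64), dtype=bool)      # second, overlapping footprint
mask_b[:, 30:] = True

sources = deblend_with_masks(image, [mask_a, mask_b])
print(len(sources), sources[0].shape)        # 2 (64, 64)
```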

     
  3. ABSTRACT

    The next generation of wide-field deep astronomical surveys will deliver unprecedented amounts of images through the 2020s and beyond. As both the sensitivity and depth of observations increase, more blended sources will be detected. This reality can lead to measurement biases that contaminate key astronomical inferences. We implement new deep learning models available through Facebook AI Research’s detectron2 repository to perform the simultaneous tasks of object identification, deblending, and classification on large multiband co-adds from the Hyper Suprime-Cam (HSC). We use existing detection/deblending codes and classification methods to train a suite of deep neural networks, including state-of-the-art transformers. Once trained, we find that transformers outperform traditional convolutional neural networks and are more robust to different contrast scalings. Transformers are able to detect and deblend objects closely matching the ground truth, achieving a median bounding box Intersection over Union of 0.99. Using high-quality class labels from the Hubble Space Telescope, we find that when classifying objects as either stars or galaxies, the best-performing networks can classify galaxies with near 100 per cent completeness and purity across the whole test sample and classify stars above 60 per cent completeness and 80 per cent purity out to HSC i-band magnitudes of 25 mag. This framework can be extended to other upcoming deep surveys such as the Legacy Survey of Space and Time and those with the Roman Space Telescope to enable fast source detection and measurement. Our code, deepdisc, is publicly available at https://github.com/grantmerz/deepdisc.
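The detection quality figure quoted above is the bounding-box Intersection over Union (IoU) between predicted and ground-truth boxes. The sketch below computes it for axis-aligned boxes; the (x1, y1, x2, y2) convention and the example values are illustrative, not detectron2 or deepdisc internals.

```python
# Minimal sketch of bounding-box IoU for two axis-aligned boxes (x1, y1, x2, y2).
def bbox_iou(box_a, box_b):
    """Intersection over Union of two axis-aligned bounding boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction offset by one pixel from a 40x40-pixel ground-truth box.
print(f"{bbox_iou((10, 10, 50, 50), (11, 10, 51, 50)):.2f}")  # ~0.95
```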

     
  4. ABSTRACT

    The size and complexity reached by the large sky spectroscopic surveys require efficient, accurate, and flexible automated tools for data analysis and science exploitation. We present the Galaxy Spectra Network/GaSNet-II, a supervised multinetwork deep learning tool for spectra classification and redshift prediction. GaSNet-II can be trained to identify a customized number of classes and optimize the redshift predictions. Redshift errors are determined via an ensemble/pseudo-Monte Carlo test obtained by randomizing the weights of the network-of-networks structure. As a demonstration of the capability of GaSNet-II, we use 260k Sloan Digital Sky Survey spectra from Data Release 16, separated into 13 classes, including 140k galactic and 120k extragalactic objects. GaSNet-II achieves 92.4 per cent average classification accuracy over the 13 classes and mean redshift errors of approximately 0.23 per cent for galaxies and 2.1 per cent for quasars. We further train/test the pipeline on a sample of 200k 4MOST (4-metre Multi-Object Spectroscopic Telescope) mock spectra and 21k publicly released DESI (Dark Energy Spectroscopic Instrument) spectra. On 4MOST mock data, we reach 93.4 per cent accuracy in 10-class classification and mean redshift error of 0.55 per cent for galaxies and 0.3 per cent for active galactic nuclei. On DESI data, we reach 96 per cent accuracy in (star/galaxy/quasar only) classification and mean redshift error of 2.8 per cent for galaxies and 4.8 per cent for quasars, despite the small sample size available. GaSNet-II can process ∼40k spectra in less than one minute on a normal desktop GPU. This makes the pipeline particularly suitable for real-time analyses and feedback loops for optimization of Stage-IV survey observations.
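The percentage redshift errors quoted above are commonly summarized as the mean of |z_pred − z_true| / (1 + z_true) over a class; the exact definition used by GaSNet-II may differ. The sketch below computes that conventional statistic on placeholder values.

```python
# Minimal sketch of a mean relative redshift error, assuming the convention
# mean(|z_pred - z_true| / (1 + z_true)). Inputs are placeholder values.
import numpy as np

def mean_relative_dz(z_true, z_pred):
    """Mean |dz| / (1 + z) over a sample."""
    z_true = np.asarray(z_true, dtype=float)
    z_pred = np.asarray(z_pred, dtype=float)
    return np.mean(np.abs(z_pred - z_true) / (1.0 + z_true))

z_true = np.array([0.10, 0.35, 0.52, 1.20])
z_pred = np.array([0.101, 0.348, 0.523, 1.190])
print(f"mean dz/(1+z) = {100 * mean_relative_dz(z_true, z_pred):.2f} per cent")
```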

     
  5. ABSTRACT

    We compare the two largest galaxy morphology catalogues, which separate early- and late-type galaxies at intermediate redshift. The two catalogues were built by applying supervised deep learning (convolutional neural networks, CNNs) to the Dark Energy Survey data down to a magnitude limit of ∼21 mag. The methodologies used for the construction of the catalogues include differences such as the cutout sizes, the labels used for training, and the input to the CNN – monochromatic images versus gri-band normalized images. In addition, one catalogue is trained using bright galaxies observed with DES (i < 18), while the other is trained with bright galaxies (r < 17.5) and ‘emulated’ galaxies up to r-band magnitude 22.5. Despite the different approaches, the agreement between the two catalogues is excellent up to i < 19, demonstrating that CNN predictions are reliable for samples at least one magnitude fainter than the training sample limit. It also shows that morphological classifications based on monochromatic images are comparable to those based on gri-band images, at least in the bright regime. At fainter magnitudes, i > 19, the overall agreement is good (∼95 per cent), but is mostly driven by the large spiral fraction in the two catalogues. In contrast, the agreement within the elliptical population is not as good, especially at faint magnitudes. By studying the mismatched cases, we are able to identify lenticular galaxies (at least up to i < 19), which are difficult to distinguish using standard classification approaches. The synergy of both catalogues provides a unique opportunity to select a population of unusual galaxies.
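The comparison described above boils down to measuring, in bins of magnitude, the fraction of galaxies given the same early/late-type label by both catalogues. The sketch below shows one way to compute such an agreement curve; the labels, magnitudes, and bin edges are illustrative placeholders, not the catalogues themselves.

```python
# Minimal sketch of a per-magnitude-bin agreement fraction between two
# morphology catalogues. All inputs are synthetic placeholders.
import numpy as np

def agreement_by_mag(label_1, label_2, i_mag, bins):
    """Fraction of matching labels in each magnitude bin."""
    same = np.asarray(label_1) == np.asarray(label_2)
    idx = np.digitize(i_mag, bins)
    return np.array([same[idx == k].mean() if np.any(idx == k) else np.nan
                     for k in range(1, len(bins))])

rng = np.random.default_rng(3)
i_mag = rng.uniform(16, 21, size=20_000)
cat1 = rng.integers(0, 2, size=i_mag.size)                       # 0 = early, 1 = late
cat2 = np.where(rng.random(i_mag.size) < 0.95, cat1, 1 - cat1)   # ~95% agreement

print(agreement_by_mag(cat1, cat2, i_mag, bins=np.arange(16, 22)))
```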

     