Title: Toward the Automated Detection of Light Echoes in Synoptic Surveys: Considerations on the Application of Deep Convolutional Neural Networks
Abstract: Light echoes (LEs) are the reflections of astrophysical transients off interstellar dust. They are fascinating astronomical phenomena that enable studies of the scattering dust as well as of the original transients. LEs, however, are rare and extremely difficult to detect, as they appear as faint, diffuse, time-evolving features. The detection of LEs still largely relies on human inspection of images, a method unfeasible in the era of large synoptic surveys. The Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST) will generate an unprecedented amount of astronomical imaging data at high spatial resolution and exquisite image quality over tens of thousands of square degrees of sky: an ideal survey for LEs. However, the Rubin data processing pipelines are optimized for the detection of point sources and will entirely miss LEs. Over the past several years, artificial intelligence (AI) object-detection frameworks have achieved and surpassed real-time, human-level performance. In this work, we leverage a data set from the Asteroid Terrestrial-impact Last Alert System (ATLAS) telescope to test a popular AI object-detection framework, You Only Look Once (YOLO), developed by the computer-vision community, and demonstrate the potential of AI for the detection of LEs in astronomical images. We find that an AI framework can reach human-level performance even with a size- and quality-limited data set. We explore and highlight challenges, including class imbalance and label incompleteness, and map out the work required to build an end-to-end pipeline for the automated detection and study of LEs in high-throughput astronomical surveys.
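To make the detection step concrete, the sketch below runs a YOLO detector over a single image cutout and reports candidate bounding boxes. It is a minimal sketch, assuming the ultralytics Python package; the weights file (le_yolo.pt) and cutout name (atlas_cutout.png) are hypothetical stand-ins, not the authors' actual trained model or data.

    # Minimal sketch: run a YOLO detector over an astronomical image cutout.
    # Assumes the ultralytics package; "le_yolo.pt" is a hypothetical weights
    # file fine-tuned on light-echo examples, not the authors' released model.
    from ultralytics import YOLO

    model = YOLO("le_yolo.pt")           # load fine-tuned detector weights
    results = model("atlas_cutout.png")  # run inference on a single cutout

    for result in results:
        for box in result.boxes:
            # Each box carries pixel coordinates and a detection confidence.
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            print(f"candidate LE at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
                  f"confidence {float(box.conf):.2f}")

In a survey setting, a pipeline of this shape would be applied to difference or coadded images, with the confidence threshold tuned against the class-imbalance and label-incompleteness issues the abstract highlights.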
Award ID(s): 2108841
PAR ID: 10380289
Author(s) / Creator(s):
Publisher / Repository: DOI prefix 10.3847
Date Published:
Journal Name: The Astronomical Journal
Volume: 164
Issue: 6
ISSN: 0004-6256
Format(s): Medium: X; Size: Article No. 250
Sponsoring Org: National Science Foundation
More Like This
1. The next generation of wide-field deep astronomical surveys will deliver unprecedented amounts of images through the 2020s and beyond. As both the sensitivity and depth of observations increase, more blended sources will be detected. This reality can lead to measurement biases that contaminate key astronomical inferences. We implement new deep learning models available through Facebook AI Research's detectron2 repository to perform the simultaneous tasks of object identification, deblending, and classification on large multiband co-adds from the Hyper Suprime-Cam (HSC). We use existing detection/deblending codes and classification methods to train a suite of deep neural networks, including state-of-the-art transformers. Once trained, we find that transformers outperform traditional convolutional neural networks and are more robust to different contrast scalings. Transformers are able to detect and deblend objects closely matching the ground truth, achieving a median bounding box Intersection over Union of 0.99. Using high-quality class labels from the Hubble Space Telescope, we find that when classifying objects as either stars or galaxies, the best-performing networks can classify galaxies with near 100 per cent completeness and purity across the whole test sample and classify stars above 60 per cent completeness and 80 per cent purity out to HSC i-band magnitudes of 25 mag. This framework can be extended to other upcoming deep surveys such as the Legacy Survey of Space and Time and those with the Roman Space Telescope to enable fast source detection and measurement. Our code, deepdisc, is publicly available at https://github.com/grantmerz/deepdisc.
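For orientation, the sketch below shows what inference with a detectron2 model of this kind looks like. It is a minimal sketch, assuming the detectron2 package: the config and weights are a stock Mask R-CNN from the public model zoo, stand-ins for the trained deepdisc models, and the input filename is hypothetical.

    # Minimal sketch: instance detection/segmentation with detectron2.
    # A stock Mask R-CNN from the model zoo stands in for the trained
    # deepdisc models; "hsc_coadd_cutout.png" is a hypothetical input.
    import cv2
    from detectron2 import model_zoo
    from detectron2.config import get_cfg
    from detectron2.engine import DefaultPredictor

    cfg = get_cfg()
    cfg.merge_from_file(model_zoo.get_config_file(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
    cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
        "COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
    cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5  # detection threshold

    predictor = DefaultPredictor(cfg)
    image = cv2.imread("hsc_coadd_cutout.png")
    outputs = predictor(image)  # per-object boxes, masks, classes, scores
    print(outputs["instances"].pred_boxes)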
2. Next-generation surveys like the Legacy Survey of Space and Time (LSST) on the Vera C. Rubin Observatory (Rubin) will generate orders of magnitude more discoveries of transients and variable stars than previous surveys. To prepare for this data deluge, we developed the Photometric LSST Astronomical Time-series Classification Challenge (PLAsTiCC), a competition that aimed to catalyze the development of robust classifiers under LSST-like conditions of a nonrepresentative training set for a large photometric test set of imbalanced classes. Over 1000 teams participated in PLAsTiCC, which was hosted on the Kaggle data science competition platform between 2018 September 28 and 2018 December 17, ultimately identifying three winners in 2019 February. Participants produced classifiers employing a diverse set of machine-learning techniques including hybrid combinations and ensemble averages of a range of approaches, among them boosted decision trees, neural networks, and multilayer perceptrons. The strong performance of the top three classifiers on Type Ia supernovae and kilonovae represents a major improvement over the current state of the art within astronomy. This paper summarizes the most promising methods and evaluates their results in detail, highlighting future directions both for classifier development and simulation needs for a next-generation PLAsTiCC data set.
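Scoring under class imbalance is central to a challenge like this. The sketch below shows a schematic class-weighted multi-class log-loss, in which each class's average is weighted so that rare classes count as much as common ones; the per-class weights are placeholders, and this is an illustration of the idea rather than the official PLAsTiCC metric.

    # Schematic class-weighted multi-class log-loss; weights are placeholders,
    # not the official PLAsTiCC competition values.
    import numpy as np

    def weighted_log_loss(y_true, y_prob, weights, eps=1e-15):
        """y_true: (N,) integer class labels; y_prob: (N, C) probabilities."""
        y_prob = np.clip(y_prob, eps, 1.0)
        loss, total_w = 0.0, 0.0
        for c, w in enumerate(weights):
            mask = (y_true == c)
            if mask.any():
                # Per-class average log-probability of the true class, so a
                # rare kilonova class weighs as much as common supernovae.
                loss += w * -np.mean(np.log(y_prob[mask, c]))
                total_w += w
        return loss / total_w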
3. Photometric redshifts will be a key data product for the Rubin Observatory Legacy Survey of Space and Time (LSST) as well as for future ground- and space-based surveys. The need for photometric redshifts, or photo-zs, arises from sparse spectroscopic coverage of observed galaxies. LSST is expected to observe billions of objects, making it crucial to have a photo-z estimator that is accurate and efficient. To that end, we present DeepDISC photo-z, a photo-z estimator that is an extension of the DeepDISC framework. The base DeepDISC network simultaneously detects, segments, and classifies objects in multi-band coadded images. We introduce photo-z capabilities to DeepDISC by adding a redshift estimation Region of Interest head, which produces a photo-z probability distribution function for each detected object. On simulated LSST images, DeepDISC photo-z outperforms traditional catalog-based estimators in both point-estimate and probabilistic metrics. We validate DeepDISC by examining dependencies on systematics including galactic extinction, blending, and PSF effects. We also examine the impact of the data quality and the size of the training set and model. We find that the biggest factor in DeepDISC photo-z quality is the signal-to-noise ratio of the imaging data, with photo-z scatter decreasing roughly in proportion to the imaging signal-to-noise. Our code is fully public and integrated in the RAIL photo-z package for ease of use and comparison to other codes at https://github.com/LSSTDESC/rail_deepdisc.
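For concreteness, a standard way to quantify the photo-z scatter mentioned above is the normalized median absolute deviation of dz = (z_phot - z_true)/(1 + z_true). The sketch below computes it on toy data; this is a common statistic in the field, not necessarily the paper's exact metric.

    # Sketch of a standard photo-z scatter statistic: the normalized median
    # absolute deviation of dz = (z_phot - z_true) / (1 + z_true).
    import numpy as np

    def sigma_nmad(z_phot, z_true):
        dz = (z_phot - z_true) / (1.0 + z_true)
        return 1.4826 * np.median(np.abs(dz - np.median(dz)))

    # Example: scatter of a toy estimator on simulated redshifts.
    rng = np.random.default_rng(0)
    z_true = rng.uniform(0.1, 2.0, 10000)
    z_phot = z_true + 0.03 * (1 + z_true) * rng.standard_normal(10000)
    print(f"sigma_NMAD = {sigma_nmad(z_phot, z_true):.3f}")  # ~0.03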
4. The Legacy Survey of Space and Time (LSST), operated by the Vera C. Rubin Observatory, is a 10-year astronomical survey, due to start operations in 2022, that will image half the sky every three nights. LSST will produce ~20 TB of raw data per night, which will be calibrated and analyzed in near real-time. Given the volume of LSST data, the traditional subset-download-process paradigm of data reprocessing faces significant challenges. We describe here the first steps toward a gateway for astronomical science that would enable astronomers to analyze images and catalogs at scale. In this first step, we focus on executing the Rubin LSST Science Pipelines, a collection of image and catalog processing algorithms, on Amazon Web Services (AWS). We describe our initial impressions of the performance, scalability, and cost of deploying such a system in the cloud.
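Back-of-the-envelope arithmetic makes the scale of the stated data rate clear; the ~300 observing nights per year figure below is a rough assumption, not a number from the paper.

    # Rough raw data volume from the ~20 TB/night rate quoted above.
    # The ~300 observing nights/year figure is an assumption.
    tb_per_night = 20
    nights_per_year = 300
    survey_years = 10
    total_pb = tb_per_night * nights_per_year * survey_years / 1000
    print(f"~{total_pb:.0f} PB of raw images over the survey")  # ~60 PB

At that scale, downloading subsets for local reprocessing stops being practical, which is the motivation for moving the pipelines to the cloud.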
5. Fulfilling the rich promise of rapid advances in time-domain astronomy is only possible through confronting our observations with physical models and extracting the parameters that best describe what we see. Here, we introduce redback: a Bayesian inference software package for electromagnetic transients. redback provides an object-oriented Python interface to over 12 different samplers and over 100 different models for kilonovae, supernovae, gamma-ray burst afterglows, tidal disruption events, and engine-driven transients, among other explosive transients. The models range in complexity from simple analytical and semi-analytical models to surrogates built upon numerical simulations accelerated via machine learning. redback also provides a simple interface for downloading and processing data from various catalogues such as Swift and FINK. The software can also serve as an engine to simulate transients for telescopes such as the Zwicky Transient Facility and Vera Rubin with realistic cadences, limiting magnitudes, and sky coverage, for a hypothetical user-constructed survey, or for a generic transient for target-of-opportunity observations with different telescopes. As a demonstration of its capabilities, we show how redback can be used to jointly fit the spectrum and photometry of a kilonova, enabling a more powerful, holistic probe into the properties of a transient. We also showcase general examples of how redback can be used as a tool to simulate transients for realistic surveys, fit models to real, simulated, or private data, perform multimessenger inference with gravitational waves, and serve as an end-to-end software toolkit for parameter estimation and for interpreting the nature of electromagnetic transients.
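redback's own interface is not reproduced here. As a generic illustration of the kind of Bayesian light-curve fit such a package automates, the sketch below uses the emcee sampler to fit a toy exponentially declining flux model; the model, priors, and noise level are all illustrative assumptions.

    # Generic illustration of Bayesian light-curve fitting of the kind
    # redback automates (this is NOT redback's API). Fits a toy
    # exponentially declining flux model with the emcee sampler.
    import numpy as np
    import emcee

    t = np.linspace(1.0, 30.0, 25)  # days since trigger (toy data)
    true_amp, true_tau = 5.0, 8.0
    rng = np.random.default_rng(1)
    flux = true_amp * np.exp(-t / true_tau) + 0.1 * rng.standard_normal(t.size)

    def log_prob(theta):
        amp, tau = theta
        if amp <= 0 or tau <= 0:  # flat priors restricted to positive values
            return -np.inf
        model = amp * np.exp(-t / tau)
        return -0.5 * np.sum(((flux - model) / 0.1) ** 2)  # Gaussian likelihood

    ndim, nwalkers = 2, 32
    p0 = np.abs(rng.standard_normal((nwalkers, ndim))) + [4.0, 7.0]
    sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
    sampler.run_mcmc(p0, 2000, progress=False)
    samples = sampler.get_chain(discard=500, flat=True)
    print("amp, tau posterior medians:", np.median(samples, axis=0))

A package like redback wraps exactly these ingredients (data handling, physical models, priors, and sampler) behind a single interface, so the user specifies the transient and model rather than writing the likelihood by hand.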