Abstract We present here the design, architecture, and first data release for the Solar System Notification Alert Processing System (SNAPS). SNAPS is a solar system broker that ingests alert data from all-sky surveys. At present, we ingest data from the Zwicky Transient Facility (ZTF) public survey, and we will ingest data from the forthcoming Legacy Survey of Space and Time (LSST) when it comes online. SNAPS is an official LSST downstream broker. In this paper we present the SNAPS design goals and requirements. We describe the details of our automatic pipeline processing in which the physical properties of asteroids are derived. We present SNAPShot1, our first data release, which contains 5,458,459 observations of 31,693 asteroids observed by ZTF from 2018 July to 2020 May. By comparing a number of derived properties for this ensemble to previously published results for overlapping objects, we show that our automatic processing is highly reliable. We present a short list of science results, among many that will be enabled by our SNAPS catalog: (1) we demonstrate that there are no known asteroids with very short periods and high amplitudes, which clearly indicates that in general asteroids in the size range 0.3–20 km are strengthless; (2) we find no difference in the period distributions of Jupiter Trojan asteroids in the L4 and L5 clouds, implying that the two clouds have similar shape distributions; and (3) we highlight several individual asteroids of interest. Finally, we describe future work for SNAPS and our ability to operate at LSST scale.
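The reasoning behind result (1) rests on the classical spin barrier for strengthless bodies: a rubble pile rotating faster than the critical period sqrt(3π/(Gρ)) would shed material at its equator. The short sketch below is an illustration of that formula only, not part of the SNAPS pipeline, and the bulk-density values are assumptions.

```python
# Minimal sketch (not SNAPS code): critical rotation period of a strengthless
# ("rubble pile") sphere. Mass is shed at the equator when the centrifugal
# acceleration exceeds self-gravity, i.e. when P < P_crit = sqrt(3*pi / (G*rho)).
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def critical_period_hours(density_kg_m3: float) -> float:
    """Shortest spin period a strengthless sphere of the given bulk density can sustain."""
    return math.sqrt(3.0 * math.pi / (G * density_kg_m3)) / 3600.0

if __name__ == "__main__":
    for rho in (1000.0, 2000.0, 3000.0):  # assumed, plausible asteroid bulk densities
        print(f"rho = {rho:6.0f} kg/m^3  ->  P_crit ~ {critical_period_hours(rho):.2f} h")
    # For rho ~ 2000-3000 kg/m^3 this gives the familiar ~2 h "spin barrier";
    # the absence of short-period, high-amplitude asteroids in the catalog is
    # consistent with 0.3-20 km asteroids being largely strengthless.
```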
The Solar System Notification Alert Processing System (SNAPS): Asteroid Population Outlier Detection
Abstract The Solar System Notification Alert Processing System (SNAPS) is a Zwicky Transient Facility (ZTF) and Rubin Observatory alert broker that will send alerts to the community regarding interesting events in the solar system. SNAPS is actively monitoring solar system objects, and one of its functions is to compare objects (primarily main belt asteroids) to one another to find those that are outliers relative to the population. In this paper, we use the SNAPShot1 data set, which contains 31,693 objects from ZTF, and derive outlier scores for each of these objects. SNAPS employs an unsupervised approach; consequently, to derive outlier rankings for each object, we propose four different outlier metrics so that we can explore variants of the outlier scores and add confidence to the outlier rankings. We also provide outlier scores for each object in every combination of between 2 and 15 of the 15 features, which yields 32,752 total feature spaces. We show that we can derive population outlier rankings each month at Rubin Observatory scale using four Nvidia A100 GPUs, and we present several avenues of scientific investigation that can be explored using population outlier detection.
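For concreteness, the 32,752 feature spaces are simply all subsets of the 15 features with at least 2 members. The sketch below checks that count and loops over subsets with a placeholder outlier score; the feature values and the scoring function are assumptions, not the SNAPS metrics.

```python
# Minimal sketch: enumerate every subset of 15 features with 2-15 members and
# score objects in each subset. The data and the outlier metric are placeholders.
from itertools import combinations
from math import comb
import numpy as np

n_features = 15
n_subsets = sum(comb(n_features, k) for k in range(2, n_features + 1))
print(n_subsets)  # 32752, matching the count quoted in the abstract

def toy_outlier_scores(X, cols):
    """Stand-in per-object outlier score: mean absolute z-score within one feature subset."""
    sub = X[:, list(cols)]
    z = (sub - sub.mean(axis=0)) / sub.std(axis=0)
    return np.abs(z).mean(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, n_features))            # placeholder for the 31,693 objects
for cols in combinations(range(n_features), 2):    # 2-feature subsets only, for brevity
    scores = toy_outlier_scores(X, cols)           # higher score = more outlying
```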
- PAR ID: 10521169
- Publisher / Repository: DOI PREFIX: 10.3847
- Date Published:
- Journal Name: The Astronomical Journal
- Volume: 168
- Issue: 2
- ISSN: 0004-6256
- Format(s): Medium: X; Size: Article No. 56
- Size(s): Article No. 56
- Sponsoring Org: National Science Foundation
More Like this
Abstract Optical surveys have become increasingly adept at identifying candidate tidal disruption events (TDEs) in large numbers, but classifying these generally requires extensive spectroscopic resources. Here we present tdescore, a simple binary photometric classifier that is trained using a systematic census of ∼3000 nuclear transients from the Zwicky Transient Facility (ZTF). The sample is highly imbalanced, with TDEs representing ∼2% of the total. tdescore is nonetheless able to reject non-TDEs with 99.6% accuracy, yielding a sample of probable TDEs with recall of 77.5% for a precision of 80.2%. tdescore is thus substantially better than any available TDE photometric classifier scheme in the literature, with performance not far from spectroscopy as a method for classifying ZTF nuclear transients, despite relying solely on ZTF data and multiwavelength catalog cross-matching. In a novel extension, we use "Shapley additive explanations" to provide a human-readable justification for each individual tdescore classification, enabling users to understand and form opinions about the underlying classifier reasoning. tdescore can serve as a model for photometric identification of TDEs with time-domain surveys, such as the upcoming Rubin Observatory.
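As a rough illustration of this kind of imbalanced photometric classification (the actual tdescore features and model are not reproduced here), a sketch on synthetic data that reports precision and recall for the rare class might look like the following.

```python
# Hedged sketch in the spirit of tdescore: a binary classifier trained on a
# heavily imbalanced synthetic sample (~2% positives), evaluated with precision
# and recall for the rare class. Features and model choice are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# ~3000 "nuclear transients", ~2% of them labeled as the rare (TDE-like) class.
X, y = make_classification(n_samples=3000, n_features=20,
                           weights=[0.98, 0.02], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
# Per-object explanations analogous to the paper's Shapley additive explanations
# could be produced with the `shap` package (e.g. shap.TreeExplainer(clf)); omitted here.
```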
Abstract The Vera C. Rubin Observatory is expected to start the Legacy Survey of Space and Time (LSST) in early to mid-2025. This multiband wide-field synoptic survey will transform our view of the solar system, with the discovery and monitoring of over five million small bodies. The final survey strategy chosen for LSST has direct implications for the discoverability and characterization of solar system minor planets and passing interstellar objects. Creating an inventory of the solar system is one of the four main LSST science drivers. The LSST observing cadence is a complex optimization problem that must balance the priorities and needs of all the key LSST science areas. To design the best LSST survey strategy, a series of operation simulations using the Rubin Observatory scheduler has been generated to explore the various options for tuning observing parameters and prioritizations. We explore the impact of the various simulated LSST observing strategies on studying the solar system's small body reservoirs. We examine which observing scenarios are best and review the important considerations for maximizing LSST solar system science. In general, most of the LSST cadence simulations produce ±5% or less variations in our chosen key metrics, but a subset of the simulations significantly hinder science returns, with much larger losses in the discovery and light-curve metrics.
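A minimal sketch of the kind of comparison described above is shown below: a solar system metric from each simulated cadence expressed as a percent change relative to a baseline run. The run names and metric values are hypothetical, not taken from the cadence simulations themselves.

```python
# Hedged sketch: compare a hypothetical small-body discovery metric across
# simulated cadences against a baseline, flagging runs outside a +/-5% band.
baseline = 0.62  # assumed baseline completeness for some small-body population
runs = {
    "baseline_run": 0.62,
    "cadence_variant_a": 0.63,
    "cadence_variant_b": 0.60,
    "cadence_variant_c": 0.55,
}

for name, value in runs.items():
    pct = 100.0 * (value - baseline) / baseline
    flag = "" if abs(pct) <= 5.0 else "  <-- larger than the typical +/-5% spread"
    print(f"{name:20s} {pct:+6.1f}%{flag}")
```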
While optical surveys regularly discover slow transients like supernovae on their own, the most common way to discover extragalactic fast transients, which fade away in a few nights, is via follow-up observations of gamma-ray burst and gravitational-wave triggers. However, wide-field surveys have the potential to also identify rapidly fading transients independently of such external triggers. The volumetric survey speed of the Zwicky Transient Facility (ZTF) makes it sensitive to faint and fast-fading objects such as kilonovae, the optical counterparts to binary neutron star and neutron star-black hole mergers, out to almost 200 Mpc. We introduce an open-source software infrastructure, the ZTF REaltime Search and Triggering, ZTFReST, designed to identify kilonovae and fast optical transients in ZTF data. Using the ZTF alert stream combined with forced photometry, we have implemented automated candidate ranking based on their photometric evolution and fitting to kilonova models. Automated triggering of follow-up systems, such as Las Cumbres Observatory, has also been implemented. In 13 months of science validation, we found several extragalactic fast transients independent of any external trigger (though some counterparts were identified later), including at least one supernova with post-shock cooling emission, two known afterglows with an associated gamma-ray burst, two known afterglows without any known gamma-ray counterpart, and three new fast-declining sources (ZTF20abtxwfx, ZTF20acozryr, and ZTF21aagwbjr) that are likely associated with GRB 200817A, GRB 201103B, and GRB 210204A, respectively. However, we have not found any objects that appear to be kilonovae; we therefore constrain the rate of GW170817-like kilonovae to R < 900 Gpc⁻³ yr⁻¹. A framework such as ZTFReST could become a prime tool for kilonova and fast transient discovery with the Vera C. Rubin Observatory.
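One ingredient of a ZTFReST-like ranking is the photometric fade rate estimated from forced photometry. The sketch below shows that single step only, under assumed values; it is not the ZTFReST implementation, and the 0.3 mag/day threshold is an illustrative choice.

```python
# Hedged sketch (not ZTFReST code): estimate a per-band fade rate in mag/day
# from forced photometry and flag candidates that fade quickly enough to be
# interesting as kilonova or afterglow candidates.
import numpy as np

def fade_rate_mag_per_day(mjd: np.ndarray, mag: np.ndarray) -> float:
    """Least-squares slope of magnitude versus time; positive means fading."""
    slope, _ = np.polyfit(mjd, mag, 1)
    return slope

# Hypothetical forced-photometry light curve for one candidate in one band.
mjd = np.array([59000.0, 59001.0, 59002.1, 59003.0])
mag = np.array([19.1, 19.6, 20.3, 20.9])

rate = fade_rate_mag_per_day(mjd, mag)
if rate > 0.3:  # assumed fast-transient threshold, in mag/day
    print(f"fast fader: {rate:.2f} mag/day -> promote for follow-up")
```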
Rankings and scores are two common data types used by judges to express preferences and/or perceptions of quality in a collection of objects. Numerous models exist to study data of each type separately, but no unified statistical model captures both data types simultaneously without first performing data conversion. We propose the Mallows-Binomial model to close this gap, which combines a Mallows φ ranking model with Binomial score models through shared parameters that quantify object quality, a consensus ranking, and the level of consensus among judges. We propose an efficient tree-search algorithm to calculate the exact MLE of model parameters, study statistical properties of the model both analytically and through simulation, and apply our model to real data from an instance of grant panel review that collected both scores and partial rankings. Furthermore, we demonstrate how model outputs can be used to rank objects with confidence. The proposed model is shown to sensibly combine information from both scores and rankings to quantify object quality and measure consensus with appropriate levels of statistical uncertainty.
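To make the shared-parameter idea concrete, the sketch below evaluates a joint log-likelihood in which object-quality parameters drive Binomial score likelihoods, the consensus ranking is the ordering of those parameters, and observed rankings follow a Mallows model with Kendall-tau distance. The parameterization details (lower quality parameter meaning "better", full rather than partial rankings) are assumptions for illustration, not the paper's exact specification.

```python
# Hedged sketch of a Mallows-Binomial-style joint log-likelihood. Quality
# parameters p are shared: they set the Binomial score probabilities and,
# via their ordering, the Mallows consensus ranking.
from itertools import combinations
from math import comb, exp, log

def kendall_tau(r1, r2) -> int:
    """Number of object pairs ordered differently by the two rankings."""
    pos1 = {obj: i for i, obj in enumerate(r1)}
    pos2 = {obj: i for i, obj in enumerate(r2)}
    return sum(1 for a, b in combinations(r1, 2)
               if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0)

def mallows_log_norm(n: int, theta: float) -> float:
    """log normalizing constant: sum_j log((1 - e^{-j*theta}) / (1 - e^{-theta}))."""
    return sum(log((1 - exp(-j * theta)) / (1 - exp(-theta))) for j in range(2, n + 1))

def joint_log_lik(p, rankings, scores, theta, M):
    consensus = sorted(range(len(p)), key=lambda j: p[j])   # assumed: lower p = better
    ll = sum(-theta * kendall_tau(r, consensus) - mallows_log_norm(len(p), theta)
             for r in rankings)                             # Mallows part (full rankings)
    for judge in scores:                                    # Binomial part
        for j, s in judge.items():
            ll += log(comb(M, s)) + s * log(p[j]) + (M - s) * log(1 - p[j])
    return ll

# Tiny hypothetical example: 3 objects, 2 judges, scores on a 0-5 scale.
p = [0.2, 0.5, 0.7]
rankings = [[0, 1, 2], [0, 2, 1]]
scores = [{0: 1, 1: 3, 2: 4}, {0: 1, 1: 2, 2: 4}]
print(joint_log_lik(p, rankings, scores, theta=1.0, M=5))
```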