Abstract Galactic science encompasses a wide range of subjects in the study of the Milky Way and Magellanic Clouds, from young stellar objects to X-ray binaries. Mapping these populations, and exploring transient phenomena within them, are among the primary science goals of the Vera C. Rubin Observatory’s Legacy Survey of Space and Time. While early versions of the survey strategy dedicated relatively few visits to the Galactic Plane region, more recent strategies under consideration envision a higher cadence within selected regions of high scientific interest. The breadth of galactic science presents a challenge in evaluating which strategies deliver the highest scientific returns. Here we present metrics designed to evaluate Rubin survey strategy simulations, based on the cadence of observations they deliver within regions of interest to different topics in galactic science, using variability categories defined by timescale. We also compare the fractions of exposures obtained in each filter with those recommended for the different science goals. We find that the baseline v2.x simulations deliver observations of the high-priority regions at sufficiently high cadence to reliably detect variability on timescales of 10 days or longer. Follow-up observations may be necessary to properly characterize variability, especially transients, on shorter timescales. Combining the regions of interest for all the science cases considered, we identify those areas of the Galactic Plane and Magellanic Clouds of highest priority. We recommend that these refined survey footprints be used in future simulations to explore rolling cadence scenarios, and to optimize the sequence of observations in different bandpasses.
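As a rough illustration of how a timescale-based cadence metric of this kind can be scored, the sketch below computes the fraction of consecutive visit gaps short enough to sample a given variability timescale. The half-timescale criterion, the synthetic visit list, and all names are illustrative assumptions, not the metrics defined in the paper.

```python
import numpy as np

def cadence_fraction(mjd, timescale_days):
    """Fraction of consecutive visit gaps shorter than half the
    variability timescale (a simple Nyquist-style criterion).

    mjd            : array of visit times (days) for one sky region
    timescale_days : characteristic variability timescale to sample
    """
    t = np.sort(np.asarray(mjd, dtype=float))
    if t.size < 2:
        return 0.0
    gaps = np.diff(t)
    return float(np.mean(gaps < 0.5 * timescale_days))

# Example: ~900 visits scattered over a 10 yr survey
rng = np.random.default_rng(42)
visits = np.sort(rng.uniform(0.0, 3650.0, size=900))
for tau in (1.0, 10.0, 100.0):
    print(f"timescale {tau:6.1f} d -> fraction well sampled: "
          f"{cadence_fraction(visits, tau):.2f}")
```

Longer timescales are sampled by a larger fraction of gaps, matching the qualitative finding that variability on 10-day-and-longer timescales is reliably covered while shorter timescales are not.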
Give Me a Few Hours: Exploring Short Timescales in Rubin Observatory Cadence Simulations
Abstract The limiting temporal resolution of a time-domain survey in detecting transient behavior is set by the time between observations of the same sky area. We analyze the distribution of visit separations for a range of Vera C. Rubin Observatory cadence simulations. Simulations from families v1.5–v1.7.1 are strongly peaked at the 22-minute separation of visit pairs and provide effectively no constraint on temporal evolution within the night. This choice will necessarily prevent Rubin from discovering a wide range of astrophysical phenomena in time to trigger rapid follow-up. We present a science-agnostic metric to supplement detailed simulations of fast-evolving transients and variables and suggest potential approaches for improving the range of timescales explored.
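A minimal sketch of the kind of science-agnostic diagnostic described here: histogramming the separations between consecutive visits in log-spaced bins from seconds to days. The binning choices and the synthetic visit list are assumptions for illustration, not the paper's actual metric.

```python
import numpy as np

def visit_separation_hist(mjd, bins=None):
    """Histogram of separations between consecutive visits to the
    same field, in log-spaced bins from ~15 s to ~30 d."""
    if bins is None:
        bins = np.logspace(np.log10(15 / 86400), np.log10(30.0), 40)  # days
    dt = np.diff(np.sort(np.asarray(mjd, dtype=float)))
    counts, edges = np.histogram(dt, bins=bins)
    return counts, edges

# Example: 1200 nights over 10 yr, half of them revisited ~22 min later
rng = np.random.default_rng(0)
nights = np.sort(rng.choice(3650, size=1200, replace=False)).astype(float)
mjd = np.concatenate([nights, nights[:600] + 22 / 1440.0])
counts, edges = visit_separation_hist(mjd)
peak = counts.argmax()
print(f"modal separation bin: {edges[peak] * 1440:.0f}-"
      f"{edges[peak + 1] * 1440:.0f} min")
```

For a pair-dominated strategy like this toy input, the histogram shows the strong peak near 22 minutes and little weight at other intra-night separations.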
- Award ID(s): 1812779
- PAR ID: 10352718
- Date Published:
- Journal Name: The Astrophysical Journal Supplement Series
- Volume: 258
- Issue: 1
- ISSN: 0067-0049
- Page Range / eLocation ID: 13
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract Due to their short timescale, stellar flares are a challenging target for the most modern synoptic sky surveys. The upcoming Vera C. Rubin Legacy Survey of Space and Time (LSST), a project designed to collect more data than any precursor survey, is unlikely to detect flares with more than one data point in its main survey. We developed a methodology to enable LSST studies of stellar flares, with a focus on flare temperature and temperature evolution, which remain poorly constrained compared to flare morphology. By leveraging the sensitivity expected from the Rubin system, differential chromatic refraction (DCR) can be used to constrain flare temperature from a single-epoch detection, which will enable statistical studies of flare temperatures and constrain models of the physical processes behind flare emission using the unprecedentedly high volume of data produced by Rubin over the 10 yr LSST. We model the refraction effect as a function of the atmospheric column density, photometric filter, and temperature of the flare, and show that flare temperatures at or above ∼4000 K can be constrained by a single g-band observation at airmass X ≳ 1.2, given the minimum specified requirement on the single-visit relative astrometric accuracy of LSST, and that a surprisingly large number of LSST observations are in fact likely to be conducted at X ≳ 1.2, in spite of image quality requirements pushing the survey to preferentially low X. Having failed to measure flare DCR in LSST precursor surveys, we make recommendations on survey design and data products that enable these studies in LSST and other future surveys.
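To make the scaling concrete, here is a back-of-the-envelope sketch of differential chromatic refraction versus airmass. It uses the Edlén (1953) fit for the refractivity of air as quoted by Filippenko (1982), a plane-parallel approximation R ≈ (n − 1) tan z, and a crude top-hat stand-in for the g band; the band edges, temperatures, and function names are illustrative assumptions, not the paper's model.

```python
import numpy as np

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

def n_minus_1(lam_um):
    """Refractivity of air (Edlen 1953 fit, standard conditions)."""
    s2 = (1.0 / lam_um) ** 2
    return 1e-6 * (64.328 + 29498.1 / (146.0 - s2) + 255.4 / (41.0 - s2))

def refraction_arcsec(lam_um, airmass):
    """Plane-parallel refraction R ~ (n-1) tan z, in arcsec."""
    tanz = np.sqrt(airmass**2 - 1.0)  # sec z = X  =>  tan z = sqrt(X^2 - 1)
    return n_minus_1(lam_um) * tanz * 206265.0

def effective_wavelength_um(T, lam_lo=0.40, lam_hi=0.55):
    """Photon-weighted effective wavelength of a blackbody of
    temperature T across a top-hat 'g band' (microns)."""
    lam = np.linspace(lam_lo, lam_hi, 500) * 1e-6  # m
    # photon flux ~ B_lambda * lambda / (h c)  ~  1 / (lam^4 (e^x - 1))
    photons = 1.0 / (lam**4 * (np.exp(H * C / (lam * KB * T)) - 1.0))
    return float(np.sum(lam * photons) / np.sum(photons)) * 1e6

# DCR offset of a hot (10,000 K) flare relative to a cool (3,000 K) star
for X in (1.0, 1.2, 1.5):
    dcr = (refraction_arcsec(effective_wavelength_um(10000.0), X)
           - refraction_arcsec(effective_wavelength_um(3000.0), X))
    print(f"X = {X:.1f}: differential shift ~ {dcr * 1000:.0f} mas")
```

The shift vanishes at X = 1 and grows to tens of milliarcseconds by X ≳ 1.2, which is why moderate-airmass g-band visits carry the temperature information.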
-
Deep learning-based predictive models, leveraging Electronic Health Records (EHR), are receiving increasing attention in healthcare. An effective representation of a patient's EHR should hierarchically encompass both the temporal relationships between historical visits and medical events, and the inherent structural information within these elements. Existing patient representation methods can be roughly categorized into sequential representation and graphical representation. The sequential representation methods focus only on the temporal relationships among longitudinal visits. On the other hand, the graphical representation approaches, while adept at extracting the graph-structured relationships between various medical events, fall short in effectively integrating temporal information. To capture both types of information, we model a patient's EHR as a novel temporal heterogeneous graph. This graph includes historical visit nodes and medical event nodes. It propagates structured information from medical event nodes to visit nodes and utilizes time-aware visit nodes to capture changes in the patient's health status. Furthermore, we introduce a novel temporal graph transformer (TRANS) that integrates temporal edge features, global positional encoding, and local structural encoding into heterogeneous graph convolution, capturing both temporal and structural information. We validate the effectiveness of TRANS through extensive experiments on three real-world datasets. The results show that our proposed approach achieves state-of-the-art performance.
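The core data structure can be sketched in a few lines: visit nodes ordered in time, event nodes shared across visits, and event-to-visit edges carrying a temporal feature. This is a toy construction illustrating the graph described above, not the TRANS implementation; the class, fields, and medical codes are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TemporalEHRGraph:
    """Toy temporal heterogeneous graph: 'visit' nodes carry timestamps;
    'event' nodes (diagnoses, drugs, ...) attach to the visits in which
    they occur, via edges with a temporal feature."""
    visit_times: list = field(default_factory=list)  # visit node id -> time
    events: dict = field(default_factory=dict)       # event code -> node id
    edges: list = field(default_factory=list)        # (event_node, visit_node, dt)

    def add_visit(self, time, event_codes):
        v = len(self.visit_times)
        self.visit_times.append(time)
        prev_t = self.visit_times[v - 1] if v > 0 else time
        for code in event_codes:
            e = self.events.setdefault(code, len(self.events))
            # temporal edge feature: time elapsed since the previous visit
            self.edges.append((e, v, time - prev_t))
        return v

g = TemporalEHRGraph()
g.add_visit(0.0, ["E11.9", "metformin"])   # hypothetical codes
g.add_visit(42.0, ["E11.9", "I10"])
print(len(g.visit_times), "visits,", len(g.events), "event types,",
      len(g.edges), "edges")
```

Because event nodes are shared across visits, structural information (co-occurring codes) and temporal information (inter-visit gaps on the edges) live in one graph, which is the property the paper's convolution exploits.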
-
Human mobility modeling from GPS trajectories and synthetic trajectory generation are crucial for various applications, such as urban planning, disaster management, and epidemiology. Both of these tasks often require filling gaps in a partially specified sequence of visits – a new problem that we call "controlled" synthetic trajectory generation. Existing methods for next-location prediction or synthetic trajectory generation cannot solve this problem as they lack the mechanisms needed to constrain the generated sequences of visits. Moreover, existing approaches (1) frequently treat space and time as independent factors, an assumption that fails to hold true in real-world scenarios, and (2) suffer from challenges in accuracy of temporal prediction as they fail to deal with mixed distributions and the inter-relationships of different modes with latent variables (e.g., day-of-the-week). These limitations become even more pronounced when the task involves filling gaps within sequences instead of solely predicting the next visit. We introduce TrajGPT, a transformer-based, multi-task, joint spatiotemporal generative model to address these issues. Taking inspiration from large language models, TrajGPT poses the problem of controlled trajectory generation as that of text infilling in natural language. TrajGPT integrates the spatial and temporal models in a transformer architecture through a Bayesian probability model that ensures that the gaps in a visit sequence are filled in a spatiotemporally consistent manner. Our experiments on public and private datasets demonstrate that TrajGPT not only excels in controlled synthetic visit generation but also outperforms competing models in next-location prediction tasks: relative to competing models, TrajGPT achieves a 26-fold improvement in temporal accuracy while retaining more than 98% of spatial accuracy on average.
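The "controlled generation as infilling" framing can be illustrated with a toy tokenization: known visits stay fixed, and a sentinel marks each gap the model must fill. This sketch shows only the problem setup under assumed names and formats, not TrajGPT's actual interface.

```python
# Controlled generation as infilling: specified visits are kept fixed,
# and the model is asked to fill only the sentinel-marked slots.
GAP = "<gap>"

def make_infilling_task(visits):
    """visits: list of (place, hour_of_day) tuples, or None for an
    unknown visit that the model should generate."""
    tokens, targets = [], []
    for i, v in enumerate(visits):
        if v is None:
            tokens.append(GAP)
            targets.append(i)          # positions the model must fill
        else:
            place, t = v
            tokens.append(f"{place}@{t:.1f}")
    return tokens, targets

day = [("home", 7.0), None, ("work", 9.0), None, ("home", 19.5)]
tokens, targets = make_infilling_task(day)
print(tokens)    # ['home@7.0', '<gap>', 'work@9.0', '<gap>', 'home@19.5']
print(targets)   # [1, 3]
```

A next-location predictor sees only the prefix; an infilling model conditions on both sides of each gap, which is what allows the generated visits to respect the fixed ones spatiotemporally.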
-
Abstract The Vera C. Rubin Legacy Survey of Space and Time will discover thousands of microlensing events across the Milky Way, allowing for the study of populations of exoplanets, stars, and compact objects. We evaluate numerous survey strategies simulated in the Rubin Operation Simulations to assess the discovery and characterization efficiencies of microlensing events. We have implemented three metrics in the Rubin Metric Analysis Framework: a discovery metric and two characterization metrics, where one estimates how well the light curve is covered and the other quantifies how precisely event parameters can be determined. We also assess the characterizability of microlensing parallax, critical for detection of free-floating black hole lenses. We find that, given Rubin’s baseline cadence, the discovery and characterization efficiency will be higher for longer-duration and larger-parallax events. Microlensing discovery efficiency is dominated by the observing footprint, where more time spent looking at regions of high stellar density, including the Galactic bulge, Galactic plane, and Magellanic Clouds, leads to higher discovery and characterization rates. However, if the observations are stretched over too wide an area, including low-priority areas of the Galactic plane with fewer stars and higher extinction, event characterization suffers by >10%. This could impact exoplanet, binary star, and compact object events alike. We find that some rolling strategies (where Rubin focuses on a fraction of the sky in alternating years) in the Galactic bulge can lead to a 15%–20% decrease in microlensing parallax characterization, so rolling strategies should be chosen carefully to minimize losses.
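As a toy version of a discovery-style metric, the sketch below counts an event as discovered when enough visits land within one Einstein timescale of the peak; it reproduces the qualitative trend that longer-duration events are easier to recover. The visit count threshold, the synthetic cadence, and all names are illustrative assumptions, not the metrics implemented in the Rubin Metric Analysis Framework for this paper.

```python
import numpy as np

def microlensing_discovered(mjd, t0, tE, n_required=6):
    """Toy discovery criterion: the event counts as 'discovered' if at
    least n_required visits fall within +/- tE of the peak time t0,
    where the magnification is appreciable."""
    mjd = np.asarray(mjd, dtype=float)
    return int(np.sum(np.abs(mjd - t0) < tE)) >= n_required

rng = np.random.default_rng(1)
visits = np.sort(rng.uniform(0.0, 3650.0, size=900))  # ~900 visits / 10 yr
for tE in (5.0, 30.0, 200.0):
    rate = np.mean([microlensing_discovered(visits, t0, tE)
                    for t0 in rng.uniform(100.0, 3550.0, size=500)])
    print(f"tE = {tE:5.0f} d -> toy discovery fraction: {rate:.2f}")
```

Short events (tE of a few days) rarely collect enough in-window visits at this cadence, while events lasting months are almost always recovered, mirroring the efficiency trend reported above.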