Title: orbitN: A Symplectic Integrator for Planetary Systems Dominated by a Central Mass—Insight into Long-term Solar System Chaos
Abstract: Reliable studies of the long-term dynamics of planetary systems require numerical integrators that are accurate and fast. The challenge is often formidable because the chaotic nature of many systems requires relative numerical error bounds at or close to machine precision (∼10⁻¹⁶, double-precision arithmetic); otherwise, numerical chaos may dominate over physical chaos. Currently, the speed/accuracy demands are usually only met by symplectic integrators. For example, the most up-to-date long-term astronomical solutions for the solar system in the past (widely used in, e.g., astrochronology and high-precision geological dating) have been obtained using symplectic integrators. However, the source codes of these integrators are unavailable. Here I present the symplectic integrator orbitN (lean version 1.0) with the primary goal of generating accurate and reproducible long-term orbital solutions for near-Keplerian planetary systems (here the solar system) with a dominant mass M₀. Among other features, orbitN-1.0 includes M₀'s quadrupole moment, a lunar contribution, and post-Newtonian corrections (1PN) due to M₀ (fast symplectic implementation). To reduce numerical round-off errors, Kahan compensated summation was implemented. I use orbitN to provide insight into the effect of various processes on the long-term chaos in the solar system. Notably, 1PN corrections have the opposite effect on chaoticity/stability on a 100 Myr versus Gyr timescale. For the current application, orbitN is about as fast as or faster (factor 1.15–2.6) than comparable integrators, depending on hardware. The orbitN source code (C) is available at http://github.com/rezeebe/orbitN.
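The abstract singles out Kahan compensated summation as the round-off control. A minimal sketch of that technique in Python (orbitN itself is written in C; this generic accumulator illustrates the idea and is not the orbitN source):

```python
def kahan_sum(values):
    """Sum a sequence while carrying the low-order bits lost by each addition."""
    total = 0.0
    comp = 0.0  # running compensation for lost low-order bits
    for x in values:
        y = x - comp             # re-inject previously lost bits
        t = total + y            # total is large, y is small: y's low bits are lost
        comp = (t - total) - y   # (t - total) recovers the high part of y
        total = t
    return total

# Naively adding 1e-16 to 1.0 a million times leaves exactly 1.0, since each
# increment falls below the rounding threshold of 1.0 in double precision:
total = 1.0
for _ in range(10**6):
    total += 1e-16
print(total)                                  # 1.0
print(kahan_sum([1.0] + [1e-16] * 10**6))     # ~1.0000000001
```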
Award ID(s):
2001022
PAR ID:
10420289
Author(s) / Creator(s):
Publisher / Repository:
DOI PREFIX: 10.3847
Date Published:
Journal Name:
The Astronomical Journal
Volume:
166
Issue:
1
ISSN:
0004-6256
Format(s):
Medium: X
Size(s):
Article No. 1
Sponsoring Org:
National Science Foundation
More Like this
1. Abstract: Ground-based high-resolution cross-correlation spectroscopy (HRCCS; R ≳ 15,000) is a powerful complement to space-based studies of exoplanet atmospheres. By resolving individual spectral lines, HRCCS can precisely measure chemical abundance ratios, directly constrain atmospheric dynamics, and robustly probe multidimensional physics. But the subtleties of HRCCS data sets—e.g., the lack of exoplanetary spectra visible by eye and the statistically complex process of telluric removal—can make interpreting them difficult. In this work, we seek to clarify the uncertainty budget of HRCCS with a forward-modeling approach. We present an HRCCS observation simulator, scope (https://github.com/arjunsavel/scope), that incorporates spectral contributions from the exoplanet, star, tellurics, and instrument. This tool allows us to control the underlying data set, enabling controlled experimentation with complex HRCCS methods. Simulating a fiducial hot Jupiter data set (WASP-77Ab emission with IGRINS), we first confirm via multiple tests that the commonly used principal component analysis does not bias the planetary signal when few components are used. Furthermore, we demonstrate that mildly varying tellurics and moderate wavelength solution errors induce only mild decreases in HRCCS detection significance. However, limiting-case, strongly varying tellurics can bias the retrieved velocities and gas abundances. Additionally, in the low signal-to-noise ratio limit, constraints on gas abundances become highly non-Gaussian. Our investigation of the uncertainties and potential biases inherent in HRCCS data analysis enables greater confidence in scientific results from this maturing method.
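A rough sketch of the principal-component detrending step whose biases the abstract investigates: subtract the mean spectrum from a spectral time series, then zero out the dominant SVD modes, which carry the quasi-static telluric and stellar structure. This generic illustration is not the scope pipeline; the array shapes and component count are assumptions.

```python
import numpy as np

def pca_detrend(flux, n_components=4):
    """flux: (n_exposures, n_pixels) spectral time series; returns residuals."""
    resid = flux - flux.mean(axis=0)                    # remove the mean spectrum
    u, s, vt = np.linalg.svd(resid, full_matrices=False)
    s[:n_components] = 0.0                              # drop telluric/stellar modes
    return u @ np.diag(s) @ vt                          # planet signal stays in residuals

rng = np.random.default_rng(0)
mock_spectra = rng.normal(1.0, 0.01, size=(40, 2048))   # 40 exposures, 2048 pixels
cleaned = pca_detrend(mock_spectra, n_components=4)
```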
2. Abstract: High-fidelity simulators that connect theoretical models with observations are indispensable tools in many sciences. If the likelihood is known, inference can proceed using standard techniques. However, when the likelihood is intractable or unknown, a simulator makes it possible to infer the parameters of a theoretical model directly from real and simulated observations when coupled with machine learning. We introduce an extension of the recently proposed likelihood-free frequentist inference (LF2I) approach that makes it possible to construct confidence sets with the p-value function and to use the same function to check the coverage explicitly at any given parameter point. Like LF2I, this extension yields provably valid confidence sets in parameter inference problems for which a high-fidelity simulator is available. The utility of our algorithm is illustrated by applying it to three pedagogically interesting examples: the first is from cosmology, the second from high-energy physics and astronomy, both with tractable likelihoods, while the third, with an intractable likelihood, is from epidemiology. Code to reproduce all of our results is available at https://github.com/AliAlkadhim/ALFFI.
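A toy version of the test-inversion idea behind building confidence sets from the p-value function: at each parameter value, simulate the test statistic, estimate a Monte Carlo p-value for the observed data, and retain the parameter if it is not rejected. The Gaussian simulator and distance statistic here are assumptions for illustration, not the ALFFI code.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, n=10):
    """Stand-in simulator: n draws from N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def test_stat(x, theta):
    return abs(x.mean() - theta)   # simple distance statistic

x_obs = simulate(0.7)              # "observed" data with true theta = 0.7
alpha = 0.05
retained = []
for theta in np.linspace(-1.0, 2.0, 61):
    # Monte Carlo p-value: fraction of simulated statistics at least as extreme
    sims = np.array([test_stat(simulate(theta), theta) for _ in range(500)])
    if (sims >= test_stat(x_obs, theta)).mean() > alpha:
        retained.append(theta)     # theta not rejected at level alpha

print(min(retained), max(retained))  # ~95% confidence interval around 0.7
```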
3. Abstract: Key science questions, such as galaxy distance estimation and weather forecasting, often require knowing the full predictive distribution of a target variable Y given complex inputs X. Despite recent advances in machine learning and physics-based models, it remains challenging to assess whether an initial model is calibrated for all x, and when needed, to reshape the densities of y toward 'instance-wise' calibration. This paper introduces the local amortized diagnostics and reshaping of conditional densities (LADaR) framework and proposes a new computationally efficient algorithm (Cal-PIT) that produces interpretable local diagnostics and provides a mechanism for adjusting conditional density estimates (CDEs). Cal-PIT learns a single interpretable local probability–probability map from calibration data that identifies where and how the initial model is miscalibrated across feature space, which can be used to morph CDEs such that they are well calibrated. We illustrate the LADaR framework on synthetic examples, including probabilistic forecasting from image sequences, akin to predicting storm wind speed from satellite imagery. Our main science application involves estimating the probability density functions of galaxy distances given photometric data, where Cal-PIT achieves better instance-wise calibration than all 11 other literature methods in a benchmark data challenge, demonstrating its utility for next-generation cosmological analyses. Code is available as a Python package at https://github.com/lee-group-cmu/Cal-PIT.
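The diagnostic underlying this approach is the probability integral transform (PIT): if a conditional density estimate is calibrated, the values F(y|x) are uniform, and the curve of empirical versus nominal coverage lies on the identity. A minimal global version of that probability-probability check (Cal-PIT's local, amortized P-P maps are more sophisticated; the overdispersed toy model below is an assumption for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 1.0, size=5000)
y = rng.normal(loc=x, scale=0.5)        # truth: y | x ~ N(x, 0.5)

# Deliberately miscalibrated CDE: correct mean, but twice the true width.
pit = norm.cdf(y, loc=x, scale=1.0)     # PIT values F(y|x) under the model

# P-P curve: empirical coverage vs. nominal level; identity means calibrated.
for a in np.linspace(0.1, 0.9, 9):
    print(f"nominal {a:.1f} -> empirical {(pit <= a).mean():.2f}")
# An overdispersed model piles PIT values near 0.5, bending the curve away
# from the identity, which is exactly what a P-P diagnostic flags.
```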
4. Abstract: In this work, we present classification results on early supernova light curves from SCONE, a photometric classifier that uses convolutional neural networks to categorize supernovae (SNe) by type using light-curve data. SCONE is able to identify SN types from light curves at any stage, from the night of initial alert to the end of their lifetimes. Simulated LSST SN light curves were truncated at 0, 5, 15, 25, and 50 days after the trigger date and used to train Gaussian processes in wavelength and time space to produce wavelength–time heatmaps. SCONE uses these heatmaps to perform six-way classification between SN types Ia, II, Ibc, Ia-91bg, Iax, and SLSN-I. SCONE is able to perform classification with or without redshift, but we show that incorporating redshift information improves performance at each epoch. SCONE achieved 75% overall accuracy at the date of trigger (60% without redshift), and 89% accuracy 50 days after trigger (82% without redshift). SCONE was also tested on bright subsets of SNe (r < 20 mag) and produced 91% accuracy at the date of trigger (83% without redshift) and 95% five days after trigger (94.7% without redshift). SCONE is the first application of convolutional neural networks to the early-time photometric transient classification problem. All of the data processing and model code developed for this paper can be found in the SCONE software package located at github.com/helenqu/scone (Qu 2021).
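A rough sketch of the heatmap-construction step described above: fit a Gaussian process over time and wavelength to sparse multiband photometry, then evaluate it on a regular grid to form the wavelength-time image fed to the CNN. The kernel, mock bands, and grid size are illustrative assumptions, not SCONE's actual choices.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)
# Sparse mock observations: (time [days], effective wavelength [nm]) -> flux
t_obs = rng.uniform(0.0, 50.0, size=60)
w_obs = rng.choice([480.0, 620.0, 750.0, 870.0], size=60)   # mock griz bands
flux = np.exp(-0.5 * ((t_obs - 20.0) / 8.0) ** 2) * (w_obs / 620.0)
flux += rng.normal(0.0, 0.02, size=60)                      # photometric noise

# Anisotropic kernel: separate length scales for time and wavelength.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=[10.0, 150.0]), alpha=1e-3)
gp.fit(np.column_stack([t_obs, w_obs]), flux)

# Evaluate on a regular wavelength-time grid: this 2D array is the heatmap.
tt, ww = np.meshgrid(np.linspace(0, 50, 32), np.linspace(450, 900, 32))
heatmap = gp.predict(np.column_stack([tt.ravel(), ww.ravel()])).reshape(32, 32)
```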
5. Abstract: Precise and accurate predictions of the halo mass function for cluster mass scales in wνCDM cosmologies are crucial for extracting robust and unbiased cosmological information from upcoming galaxy cluster surveys. Here, we present a halo mass function emulator for cluster mass scales (≳ 10¹³ M⊙/h) up to redshift z = 2 with comprehensive support for the parameter space of wνCDM cosmologies allowed by current data. Based on the Aemulusν suite of simulations, the emulator marks a significant improvement in the precision of halo mass function predictions by incorporating both massive neutrinos and non-standard dark energy equation of state models. This allows for accurate modeling of the cosmology dependence in large-scale structure and galaxy cluster studies. We show that the emulator, designed using Gaussian Process Regression, has negligible theoretical uncertainties compared to dominant sources of error in future cluster abundance studies. Our emulator is publicly available (https://github.com/DelonShen/aemulusnu_hmf), providing the community with a crucial tool for upcoming cosmological surveys such as LSST and Euclid.
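A schematic of the emulation strategy named in the abstract: train a Gaussian process regressor on simulation outputs over a design of cosmological parameters, then predict at new points with an attached uncertainty (the emulator's "theoretical" error). The parameters, ranges, and mock outputs below are placeholders, not the Aemulusν design.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(4)
# Mock training design: (Omega_m, sigma_8, w) -> a scalar summary of the HMF
X = rng.uniform([0.25, 0.70, -1.3], [0.35, 0.90, -0.7], size=(50, 3))
y = -4.0 + 3.0 * (X[:, 0] - 0.30) + 2.0 * (X[:, 1] - 0.80)  # fake simulation output

gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[0.05, 0.10, 0.30]),
    normalize_y=True,
)
gp.fit(X, y)

mean, std = gp.predict([[0.31, 0.81, -1.0]], return_std=True)
print(mean[0], std[0])  # prediction plus emulator uncertainty at a new cosmology
```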