Abstract: Machine learning (ML) has been applied to space weather problems with increasing frequency in recent years, driven by an influx of in-situ measurements and a desire to improve modeling and forecasting capabilities throughout the field. Space weather originates from solar perturbations and comprises the resulting complex variations they cause within the numerous systems between the Sun and Earth. These systems are often tightly coupled and not well understood. This creates a need for skillful models that also convey the confidence of their predictions. One example of a dynamical system highly impacted by space weather is the thermosphere, the neutral region of Earth's upper atmosphere. Our inability to forecast it has severe repercussions for satellite drag and for the computation of collision probability between two space objects in low Earth orbit (LEO), which underpin decision making in space operations. Even with an (assumed) perfect forecast of model drivers, our incomplete knowledge of the system often results in inaccurate thermospheric neutral mass density predictions. Continuing efforts are being made to improve model accuracy, but density models rarely provide estimates of confidence in their predictions. In this work, we propose two techniques for developing nonlinear ML regression models that predict thermospheric density while providing robust and reliable uncertainty estimates: Monte Carlo (MC) dropout and direct prediction of the probability distribution, both using the negative logarithm of predictive density (NLPD) loss function. We show the performance capabilities of models trained on both local and global datasets. The NLPD loss provides similar results for both techniques, but the direct probability distribution prediction method has a much lower computational cost. For the global model regressed on the Space Environment Technologies High Accuracy Satellite Drag Model (HASDM) density database, we achieve errors of approximately 11% on independent test data with well-calibrated uncertainty estimates. Using an in-situ CHAllenging Minisatellite Payload (CHAMP) density dataset, models developed with both techniques provide test errors on the order of 13%. The CHAMP models, on validation and test data, are within 2% of perfect calibration for all twenty prediction intervals tested. We also show that this model can be used to obtain global density predictions with uncertainties at a given epoch.
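As a purely illustrative sketch (not the authors' code), the direct-distribution variant described above can be realized by a regression network that outputs a mean and a log-variance and is trained with a Gaussian NLPD loss; the framework (PyTorch), layer sizes, and variable names below are assumptions. The MC dropout variant would instead keep dropout active at inference time and estimate the predictive distribution from repeated stochastic forward passes.

```python
# Hypothetical sketch of direct probability-distribution prediction with an
# NLPD loss. Input dimension, widths, and names are illustrative only.
import torch
import torch.nn as nn

class DensityNet(nn.Module):
    def __init__(self, n_inputs=10, n_hidden=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(n_inputs, n_hidden), nn.ReLU(),
            nn.Linear(n_hidden, n_hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(n_hidden, 1)    # predicted (log-)density mean
        self.logvar_head = nn.Linear(n_hidden, 1)  # predicted log-variance

    def forward(self, x):
        h = self.backbone(x)
        return self.mean_head(h), self.logvar_head(h)

def nlpd_loss(mean, logvar, target):
    # Gaussian negative log predictive density, averaged over the batch
    # (the constant 0.5*log(2*pi) term is dropped; it does not affect training).
    return 0.5 * (logvar + (target - mean) ** 2 / logvar.exp()).mean()

# Toy usage: random driver inputs -> density targets, one optimization step.
model = DensityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 10), torch.randn(32, 1)
opt.zero_grad()
mean, logvar = model(x)
loss = nlpd_loss(mean, logvar, y)
loss.backward()
opt.step()
```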
The Simplified Approach to the Bose Gas Without Translation Invariance
Abstract: The simplified approach to the Bose gas was introduced by Lieb in 1963 to study the ground state of systems of interacting bosons. In a series of recent papers, it has been shown that the simplified approach exceeds earlier expectations and gives asymptotically accurate predictions at both low and high density. In the intermediate density regime, the qualitative predictions of the simplified approach have also been found to agree very well with quantum Monte Carlo computations. Until now, the simplified approach had only been formulated for translation-invariant systems, thus excluding external potentials and non-periodic boundary conditions. In this paper, we extend the formulation of the simplified approach to a wide class of systems without translation invariance. This also allows us to study observables in translation-invariant systems whose computation requires the symmetry to be broken. One such observable is the momentum distribution, which counts the number of particles in excited states of the Laplacian. We show how to compute the momentum distribution in the simplified approach, and show that, for the simple equation, our prediction matches Bogolyubov's prediction at low densities, for momenta extending up to the inverse healing length.
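For context, the low-density benchmark referred to here is the standard Bogolyubov momentum distribution. The textbook form below (with its usual conventions, which are our assumption and not notation taken from the paper) shows the quantities involved, including the healing length that sets the momentum scale up to which the comparison is made.

```latex
% Standard Bogolyubov momentum distribution for a dilute Bose gas of density \rho
% and scattering length a (textbook conventions; not the paper's notation).
\[
  \mathcal{N}_k \;=\; \frac{\epsilon_k + g\rho - E_k}{2E_k},
  \qquad
  E_k = \sqrt{\epsilon_k\left(\epsilon_k + 2g\rho\right)},
  \qquad
  \epsilon_k = \frac{\hbar^2 k^2}{2m},
\]
\[
  g = \frac{4\pi\hbar^2 a}{m},
  \qquad
  \xi = \frac{\hbar}{\sqrt{2mg\rho}} = \frac{1}{\sqrt{8\pi a\rho}}
  \quad\text{(healing length; } \mathcal{N}_k \text{ is appreciable only for } k \lesssim \xi^{-1}\text{)}.
\]
```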
- Award ID(s): 2349077
- PAR ID: 10596569
- Publisher / Repository: Springer
- Date Published:
- Journal Name: Journal of Statistical Physics
- Volume: 191
- Issue: 7
- ISSN: 1572-9613
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Low voltage microgrid systems are characterized by high sensitivity to both active and reactive power for voltage support. Also, the operational conditions of microgrids connected to active distribution systems are time-varying. Thus, the ideal controller to provide voltage support must be flexible enough to handle technical and operational constraints. This paper proposes a model predictive control (MPC) approach to provide dynamic voltage support using energy storage systems. This approach uses a simplified predictive model of the system along with operational constraints to solve an online finite-horizon optimization problem. Control signals are then computed such that the defined cost function is minimized. By proper selection of the MPC weighting parameters, the quality of service provided can be adjusted to achieve the desired performance. A simulation study in Matlab/Simulink validates the proposed approach for a simplified version of a 100 kVA, 208 V microgrid using typical parameters. Results show that the performance of the voltage support can be adjusted depending on the choice of weights and constraints in the controller.
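As a rough illustration of such a finite-horizon formulation (a sketch only, not the paper's controller: the one-bus linearized voltage model, parameter values, and weights below are placeholder assumptions), one MPC step can be posed as a small convex program:

```python
# Illustrative finite-horizon MPC step for voltage support via reactive power,
# using an assumed linearized single-bus voltage-sensitivity model.
import cvxpy as cp

N = 10                      # prediction horizon (steps)
v0, v_ref = 0.94, 1.00      # current and reference voltage (p.u.)
k_q = 0.02                  # assumed voltage sensitivity to injected reactive power
q_max = 1.0                 # converter reactive-power limit (p.u.)
w_v, w_q = 100.0, 1.0       # weights: voltage tracking vs. control effort

q = cp.Variable(N)          # reactive-power injections over the horizon
v = cp.Variable(N + 1)      # predicted voltage trajectory

constraints = [v[0] == v0, cp.abs(q) <= q_max]
for t in range(N):
    constraints.append(v[t + 1] == v[t] + k_q * q[t])   # simplified predictive model

cost = w_v * cp.sum_squares(v[1:] - v_ref) + w_q * cp.sum_squares(q)
cp.Problem(cp.Minimize(cost), constraints).solve()
print("first control move:", q.value[0])  # apply, then re-solve at the next step
```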
This paper presents a search for direct top squark pair production in events with missing transverse momentum plus either a pair of jets consistent with Standard Model Higgs boson decay into b-quarks or a same-flavour opposite-sign dilepton pair with an invariant mass consistent with a Z boson. The analysis is performed using the proton–proton collision data at $\sqrt{s}=13$ TeV collected with the ATLAS detector during LHC Run 2, corresponding to an integrated luminosity of 139 fb$^{-1}$. No excess is observed in the data above the Standard Model predictions. The results are interpreted in simplified models featuring direct production of pairs of either the lighter top squark ($\tilde{t}_1$) or the heavier top squark ($\tilde{t}_2$), excluding at 95% confidence level $\tilde{t}_1$ and $\tilde{t}_2$ masses up to about 1220 and 875 GeV, respectively.
The dynamics of marine systems at decadal scales are notoriously hard to predict—hence references to this timescale as the "grey zone" for ocean prediction. Nevertheless, decadal-scale prediction is a rapidly developing field with an increasing number of applications to help guide ocean stewardship and sustainable use of marine environments. Such predictions can provide industry and managers with information more suited to support planning and management over strategic timeframes, as compared to seasonal forecasts or long-term (century-scale) predictions. The most significant advances in capability for decadal-scale prediction over recent years have been for ocean physics and biogeochemistry, with some notable advances in ecological prediction skill. In this paper, we argue that the process of "lighting the grey zone" by providing improved predictions at decadal scales should also focus on including human dimensions in prediction systems to better meet the needs and priorities of end users. Our paper reviews information needs for decision-making at decadal scales and assesses current capabilities for meeting these needs. We identify key gaps in current capabilities, including the particular challenge of integrating human elements into decadal prediction systems. We then suggest approaches for overcoming these challenges and gaps, highlighting the important role of co-production of tools and scenarios, to build trust and ensure uptake with end users of decadal prediction systems. We also highlight opportunities for combining narratives and quantitative predictions to better incorporate the human dimension in future efforts to light the grey zone of decadal-scale prediction.
Translation to or from low-resource languages (LRLs) poses challenges for machine translation in terms of both adequacy and fluency. Data augmentation utilizing large amounts of monolingual data is regarded as an effective way to alleviate these problems. In this paper, we propose a general framework for data augmentation in low-resource machine translation that not only uses target-side monolingual data, but also pivots through a related high-resource language (HRL). Specifically, we experiment with a two-step pivoting method to convert high-resource data to the LRL, making use of available resources to better approximate the true data distribution of the LRL. First, we inject LRL words into HRL sentences through an induced bilingual dictionary. Second, we further edit these modified sentences using a modified unsupervised machine translation framework. Extensive experiments on four low-resource datasets show that, under extreme low-resource settings, our data augmentation techniques improve translation quality by up to 1.5 to 8 BLEU points compared to supervised back-translation baselines.
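To make the first pivoting step concrete, here is a minimal sketch (not the authors' implementation): HRL tokens with an entry in an induced bilingual dictionary are probabilistically replaced by their LRL counterparts; the dictionary contents and the substitution probability are placeholder assumptions, and the second step would further edit the result with an unsupervised MT model.

```python
# Hypothetical sketch of dictionary-based LRL word injection into HRL sentences.
import random

def inject_lrl_words(hrl_sentence, hrl_to_lrl, p_replace=0.5):
    """Replace HRL tokens that have a dictionary entry with their LRL translation."""
    out = []
    for token in hrl_sentence.split():
        if token in hrl_to_lrl and random.random() < p_replace:
            out.append(hrl_to_lrl[token])
        else:
            out.append(token)
    return " ".join(out)

# Toy example with a made-up HRL->LRL dictionary.
toy_dict = {"house": "casa", "red": "roja"}
print(inject_lrl_words("the red house is small", toy_dict))
```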