Abstract The Mass Spectrometer and Incoherent Scatter radar (MSIS) model family has been developed and improved since the early 1970s. The most recent version is the Naval Research Laboratory (NRL) MSIS 2.0 empirical atmospheric model. NRLMSIS 2.0 provides species density, mass density, and temperature estimates as a function of location and space weather conditions. MSIS models have long been a popular choice of thermosphere model in the research and operations communities alike but, like many models, do not provide uncertainty estimates. In this work, we develop an exospheric temperature model based on machine learning that can be used with NRLMSIS 2.0 to calibrate it relative to high‐fidelity satellite density estimates directly through the exospheric temperature parameter. Instead of providing point estimates, our model (called MSIS‐UQ) outputs a distribution, which is assessed using a metric called the calibration error score. We show that MSIS‐UQ debiases NRLMSIS 2.0, reducing differences between model and satellite density by 25%, and is 11% closer to satellite density than the Space Force's High Accuracy Satellite Drag Model. We also demonstrate the model's uncertainty estimation capabilities by generating altitude profiles for species density, mass density, and temperature, explicitly showing how exospheric temperature probabilities affect density and temperature profiles within NRLMSIS 2.0. A final case study shows improved post‐storm overcooling capabilities relative to NRLMSIS 2.0 alone, enhancing the phenomena the model can capture.
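The abstract does not spell out the calibration error score; as a hedged illustration, one common formulation for a Gaussian predictive distribution averages the gap between nominal and empirically observed coverage of central prediction intervals. The minimal Python sketch below assumes hypothetical arrays `mu` and `sigma` for the predicted mean and standard deviation, not the authors' implementation:

```python
import numpy as np
from scipy import stats

def calibration_error_score(y_true, mu, sigma, levels=np.linspace(0.05, 0.95, 19)):
    """Mean absolute gap between nominal and observed coverage of central
    prediction intervals from a Gaussian predictive distribution."""
    errs = []
    for p in levels:
        half = stats.norm.ppf(0.5 + p / 2.0)           # interval half-width (sigmas)
        inside = np.abs(y_true - mu) <= half * sigma   # samples falling inside
        errs.append(abs(inside.mean() - p))            # coverage gap at level p
    return float(np.mean(errs))

# A well-calibrated Gaussian model scores near zero on synthetic data
rng = np.random.default_rng(0)
mu = rng.uniform(600.0, 1200.0, 10_000)   # hypothetical exospheric temperatures (K)
sigma = np.full_like(mu, 50.0)            # predicted standard deviations (K)
y_true = rng.normal(mu, sigma)            # "observed" temperatures
print(calibration_error_score(y_true, mu, sigma))
```

A perfectly calibrated model scores near zero; systematic over- or under-confidence in the predicted distribution inflates the score.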
-
Abstract Machine learning (ML) models are universal function approximators and, if used correctly, can summarize the information content of observational data sets in a functional form for scientific and engineering applications. A benefit of ML over parametric models is that there are no a priori assumptions about particular basis functions, which can potentially limit the phenomena that can be modeled. In this work, we develop ML models on three data sets: the Space Environment Technologies High Accuracy Satellite Drag Model (HASDM) density database, a spatiotemporally matched data set of outputs from the Jacchia‐Bowman 2008 Empirical Thermospheric Density Model (JB2008), and an accelerometer‐derived density data set from the CHAllenging Minisatellite Payload (CHAMP). These ML models are compared to the Naval Research Laboratory Mass Spectrometer and Incoherent Scatter radar (NRLMSIS 2.0) model to study the presence of post‐storm cooling in the middle thermosphere. We find that neither NRLMSIS 2.0 nor JB2008‐ML accounts for post‐storm cooling, and both consequently perform poorly in periods following strong geomagnetic storms (e.g., the 2003 Halloween storms). Conversely, HASDM‐ML and CHAMP‐ML do show evidence of post‐storm cooling, indicating that this phenomenon is present in the original data sets. Results show that density reductions of up to 40% can occur 1–3 days post‐storm, depending on the location and strength of the storm.
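As a hedged sketch of the kind of comparison behind the 40% figure, the percent difference between a data-driven density model and a baseline over a post-storm window can be computed as follows (all values and names here are hypothetical placeholders, not data from the study):

```python
import numpy as np

def poststorm_density_change(rho_model, rho_baseline):
    """Percent density difference of a data-driven model relative to a
    baseline such as NRLMSIS 2.0; negative values indicate depletion
    (post-storm cooling) that the baseline does not capture."""
    return 100.0 * (rho_model - rho_baseline) / rho_baseline

# Hypothetical densities (kg/m^3) at one location, 1-3 days after a storm
rho_ml = np.array([2.1e-12, 1.8e-12, 2.0e-12])     # e.g., an HASDM-ML output
rho_msis = np.array([3.2e-12, 2.9e-12, 2.8e-12])   # e.g., NRLMSIS 2.0 output
print(poststorm_density_change(rho_ml, rho_msis))  # about -34%, -38%, -29%
```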
-
Abstract The EXospheric TEMperatures on a PoLyhedrAl gRid (EXTEMPLAR) method predicts neutral densities in the thermosphere. The performance of this model has been evaluated through a comparison with the Air Force High Accuracy Satellite Drag Model (HASDM). The Space Environment Technologies (SET) HASDM database used for this test spans the 20 years 2000 through 2019, containing densities at 3-hr time intervals, 25-km altitude steps, and a spatial resolution of 10° latitude by 15° longitude. The upgraded EXTEMPLAR that was tested uses the newer Naval Research Laboratory MSIS 2.0 model to convert global exospheric temperature values to neutral density as a function of altitude. The revision also incorporated time delays, varying as a function of location, between the total Poynting flux in the polar regions and the exospheric temperature response. The density values from both models were integrated on spherical shells at altitudes ranging from 200 to 800 km, and these sums were compared as a function of time. The results show excellent agreement at temporal scales ranging from hours to years. The EXTEMPLAR model performs best at altitudes of 400 km and above, where geomagnetic storms produce the largest relative changes in neutral density. In addition to providing an effective method for comparing models that have very different spatial resolutions, the use of density totals at various altitudes offers a useful illustration of how the thermosphere behaves at different altitudes, on time scales ranging from hours to complete solar cycles.
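A minimal sketch of the shell-integration idea, assuming both models have first been interpolated to a common regular latitude/longitude grid (the function and grid details are illustrative, not the authors' code):

```python
import numpy as np

def shell_total(density, lat_deg, alt_km, r_earth_km=6371.0):
    """Area-weighted total of mass density over one spherical shell.

    density : 2-D array (n_lat, n_lon) on a regular lat/lon grid (kg/m^3)
    lat_deg : 1-D array of grid-cell center latitudes (degrees)
    alt_km  : shell altitude (km)
    Returns mass per unit shell thickness (kg/m); multiplying by a layer
    thickness gives the mass contained in that shell.
    """
    n_lat, n_lon = density.shape
    dlat = np.deg2rad(180.0 / n_lat)                 # cell height (rad)
    dlon = np.deg2rad(360.0 / n_lon)                 # cell width (rad)
    r = (r_earth_km + alt_km) * 1e3                  # shell radius (m)
    # spherical surface element: r^2 cos(lat) dlat dlon
    area = (r ** 2 * np.cos(np.deg2rad(lat_deg)) * dlat * dlon)[:, None]
    return float(np.sum(density * area))
```

Repeating this for shells from 200 to 800 km and for each 3-hr epoch yields the time series of altitude-resolved totals on which the comparison is based.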
-
Abstract We present a new high‐resolution empirical model for the ionospheric total electron content (TEC). TEC data are obtained from global navigation satellite system (GNSS) receivers with a 1° × 1° spatial resolution and 5‐min temporal resolution. The linear regression model is developed at 45°N, 0°E for the years 2000–2019 with 30‐min temporal resolution, unprecedented for typical empirical ionospheric models. The model describes the dependency of TEC on solar flux, season, geomagnetic activity, and local time. Parameters describing solar and geomagnetic activity are evaluated. In particular, several options for solar flux input to the model are compared, including the 10.7 cm solar radio flux (F10.7), the Mg II core‐to‐wing ratio, and formulations of the solar extreme ultraviolet (EUV) flux. Ultimately, the EUV flux presented by the Flare Irradiance Spectral Model, integrated from 0.05 to 105.05 nm, best represents the solar flux input to the model. TEC time delays to this solar parameter on the order of several days, as well as seasonal modulation of the solar flux terms, are included. The 3‐hourly Ap index and its history are used to reflect the influence of geomagnetic activity. The root mean squared error of the model (relative to the mean TEC observed in the 30‐min window) is 1.9539 TECu. A validation of this model for the first 3 months of 2020 shows excellent agreement with data. The new model shows significant improvement over the International Reference Ionosphere 2016 (IRI‐2016) when the two are compared during 2008 and 2012.
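As an illustration of the regression structure described above, here is a hedged sketch with assumed feature names (lagged EUV terms, seasonal and local-time harmonics, a season-modulated flux term, and the Ap index with history); the paper's actual basis functions differ in detail:

```python
import numpy as np

def design_matrix(euv, ap3, doy_frac, lt_frac, lags=(0, 48, 96, 144)):
    """Hypothetical regressor set for a linear TEC model.

    euv      : EUV flux time series at 30-min cadence
    ap3      : 3-hourly Ap index resampled to the same cadence
    doy_frac : day-of-year / 365.25 for each sample
    lt_frac  : local time / 24 h for each sample
    lags     : solar-flux delays in 30-min samples (here 0-3 days)
    """
    cols = [np.ones_like(euv)]
    for lag in lags:
        # np.roll wraps at the series ends; a real model would trim them
        cols.append(np.roll(euv, lag))
    cols += [np.sin(2 * np.pi * doy_frac), np.cos(2 * np.pi * doy_frac),
             np.sin(2 * np.pi * lt_frac), np.cos(2 * np.pi * lt_frac),
             euv * np.cos(2 * np.pi * doy_frac),   # season-modulated flux
             ap3, np.roll(ap3, 6)]                 # Ap and its recent history
    return np.column_stack(cols)

def fit_and_rmse(X, tec):
    """Ordinary least squares fit and RMSE (in TECu) of the residuals."""
    coef, *_ = np.linalg.lstsq(X, tec, rcond=None)
    resid = tec - X @ coef
    return coef, float(np.sqrt(np.mean(resid ** 2)))
```
-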
Abstract The Community Coordinated Modeling Center has been leading community‐wide space science and space weather model validation projects for many years. These efforts have been broadened and extended via the newly launched International Forum for Space Weather Modeling Capabilities Assessment (https://ccmc.gsfc.nasa.gov/assessment/). Its objective is to track space weather models' progress and performance over time, a capability that is critically needed in space weather operations and different user communities in general. The Space Radiation and Plasma Effects Working Team of the aforementioned International Forum works on one of the many focused evaluation topics and deals with five different subtopics (https://ccmc.gsfc.nasa.gov/assessment/topics/radiation-all.php) and varieties of particle populations: surface charging from tens of eV to 50‐keV electrons; internal charging due to energetic electrons from hundreds of keV to several MeV; single‐event effects from solar energetic particles and galactic cosmic rays (several MeV to TeV); total dose due to the accumulation of doses from electrons (>100 keV) and protons (>1 MeV) in a broad energy range; and radiation effects from solar energetic particles and galactic cosmic rays at aviation altitudes. A unique aspect of the Space Radiation and Plasma Effects focus area is that it bridges the space environments, engineering, and user communities. The intent of this paper is to provide an overview of the current status and to suggest a guide for how best to validate space environment models for operational/engineering use, including the selection of essential space environment and effect quantities and appropriate metrics.
-
Abstract Geomagnetic indices are convenient quantities that distill the complicated physics of some region or aspect of near‐Earth space into a single parameter. Most of the best‐known indices, such as Dst, SYM‐H, Kp, AE, AL, and PC, are calculated from ground‐based magnetometer data sets. Many models have been created to predict the values of these indices, often using solar wind measurements upstream from Earth as the input variables. This document reviews the current state of models that predict geomagnetic indices and the methods used to assess their ability to reproduce the target index time series. These existing methods are synthesized into a baseline collection of metrics for benchmarking a new or updated geomagnetic index prediction model. The methods fall into two categories: (1) fit performance metrics, such as root‐mean‐square error and mean absolute error, applied to a time series comparison of model output and observations, and (2) event detection performance metrics, such as the Heidke Skill Score and probability of detection, derived from a contingency table comparing whether model and observation values exceed (or not) a threshold value. A few examples of codes being used with this set of metrics are presented, and other aspects of metrics assessment best practices, limitations, and uncertainties are discussed, including several caveats to consider when using geomagnetic indices.
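Both metric families are straightforward to compute. The following minimal Python sketch, under assumed array inputs `model` and `obs` (the function and variable names are illustrative, not from the paper), shows the fit metrics and a 2×2 contingency-table evaluation yielding probability of detection and the Heidke Skill Score:

```python
import numpy as np

def fit_metrics(model, obs):
    """Time-series fit metrics applied to model output vs. observations."""
    err = model - obs
    return {"RMSE": float(np.sqrt(np.mean(err ** 2))),
            "MAE": float(np.mean(np.abs(err)))}

def event_metrics(model, obs, threshold):
    """Event-detection metrics from a 2x2 contingency table of threshold
    crossings; for Dst-like indices, where events are values below a
    negative threshold, negate the series before applying this test."""
    m, o = model >= threshold, obs >= threshold
    a = np.sum(m & o)     # hits
    b = np.sum(m & ~o)    # false alarms
    c = np.sum(~m & o)    # misses
    d = np.sum(~m & ~o)   # correct negatives
    pod = a / (a + c)     # probability of detection
    hss = 2.0 * (a * d - b * c) / ((a + c) * (c + d) + (a + b) * (b + d))
    return {"POD": float(pod), "HSS": float(hss)}
```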