Title: Detection and Classification of Sporadic E Using Convolutional Neural Networks
Abstract

In this work, convolutional neural networks (CNNs) are developed to detect and characterize sporadic E (Es), demonstrating an improvement over current methods. This includes a binary classification model to determine if Es is present, followed by a regression model to estimate the Es ordinary mode critical frequency (foEs), a proxy for the intensity, along with the height at which the Es layer occurs (hEs). Signal-to-noise ratio (SNR) and excess phase profiles from six Global Navigation Satellite System (GNSS) radio occultation (RO) missions during the years 2008–2022 are used as the inputs to the models. Intensity (foEs) and height (hEs) values obtained from the global network of ground-based Digisonde ionosondes are used as the "ground truth," or target variables, during training. After matching the two data sets, a total of 36,521 samples are available for training and testing the models. The foEs CNN binary classification model achieved an accuracy of 74% and an F1-score of 0.70. Mean absolute errors (MAE) of 0.63 MHz and 5.81 km, along with root-mean-square errors (RMSE) of 0.95 MHz and 7.89 km, were attained for estimating foEs and hEs, respectively, when it was known that Es was present. When the classification and regression models are combined for practical applications where it is unknown whether Es is present, an foEs MAE and RMSE of 0.97 and 1.65 MHz, respectively, were realized. We implemented three other techniques for sporadic E characterization and found that the CNN model appears to perform better.
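As a rough illustration of the two-stage pipeline described above, the Python sketch below builds a 1-D CNN classifier for Es detection and a companion CNN regressor for foEs and hEs. The profile length, channel layout, layer sizes, and all variable names are assumptions made for illustration; they are not taken from the paper.

from tensorflow import keras
from tensorflow.keras import layers

PROFILE_LEN = 512   # assumed number of vertical samples per RO profile
N_CHANNELS = 2      # assumed input channels: SNR and excess phase

def build_cnn(output_units, output_activation):
    # Shared 1-D CNN backbone with a task-specific output head.
    inputs = keras.Input(shape=(PROFILE_LEN, N_CHANNELS))
    x = layers.Conv1D(32, 7, activation="relu", padding="same")(inputs)
    x = layers.MaxPooling1D(2)(x)
    x = layers.Conv1D(64, 5, activation="relu", padding="same")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.GlobalAveragePooling1D()(x)
    x = layers.Dense(64, activation="relu")(x)
    outputs = layers.Dense(output_units, activation=output_activation)(x)
    return keras.Model(inputs, outputs)

# Stage 1: binary classification -- is an Es layer present?
classifier = build_cnn(1, "sigmoid")
classifier.compile(optimizer="adam", loss="binary_crossentropy",
                   metrics=["accuracy"])

# Stage 2: regression of foEs (MHz) and hEs (km), trained on Es-present cases.
regressor = build_cnn(2, None)
regressor.compile(optimizer="adam", loss="mae")

# At inference time the two stages are chained: only profiles the classifier
# flags as Es are passed to the regressor, e.g.
#   is_es = classifier.predict(profiles)[:, 0] > 0.5
#   foes_hes = regressor.predict(profiles[is_es])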

 
Award ID(s): 2139916, 2221765
NSF-PAR ID: 10488841
Author(s) / Creator(s):
Publisher / Repository: Wiley
Date Published:
Journal Name: Space Weather
Volume: 22
Issue: 1
ISSN: 1542-7390
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Le, Khanh N.Q. (Ed.)
    In current clinical settings, pain is typically measured by a patient's self-reported information. This subjective pain assessment results in suboptimal treatment plans, over-prescription of opioids, and drug-seeking behavior among patients. In the present study, we explored automatic, objective pain intensity estimation using machine learning models with inputs from physiological sensors. This study uses the BioVid Heat Pain Dataset. We extracted features from Electrodermal Activity (EDA), Electrocardiogram (ECG), and Electromyogram (EMG) signals collected from study participants subjected to heat pain. We built different machine learning models, including Linear Regression, Support Vector Regression (SVR), Neural Networks, and Extreme Gradient Boosting, for continuous-value pain intensity estimation. We then identified the physiological sensor, feature set, and machine learning model that give the best predictive performance. We found that EDA is the most information-rich sensor for continuous pain intensity prediction. A set of only three features from EDA signals using an SVR model gave an average performance of 0.93 mean absolute error (MAE) and 1.16 root mean square error (RMSE) for the subject-independent model, and 0.92 MAE and 1.13 RMSE for the subject-dependent model. The MAE achieved with this signal-feature-model combination is less than 1 unit on the 0 to 4 continuous pain scale, which is smaller than the MAE achieved by the methods reported in the literature. These results demonstrate that it is possible to estimate the pain intensity of a patient using a computationally inexpensive machine learning model with three statistical features from the EDA signal, which can be collected from a wrist biosensor. This method paves the way toward developing a wearable pain measurement device.
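    A minimal sketch of the kind of model the best-performing configuration above implies: an RBF-kernel SVR on a handful of EDA features, evaluated subject-independently with grouped cross-validation. The placeholder features, labels, and subject grouping are assumptions for illustration only.

    import numpy as np
    from sklearn.svm import SVR
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.model_selection import GroupKFold, cross_val_score

    # X: three statistical features per EDA window (placeholders here);
    # y: continuous pain intensity on the 0-4 scale; groups: subject IDs,
    # so that no subject appears in both training and test folds.
    rng = np.random.default_rng(0)
    X = rng.random((200, 3))
    y = rng.uniform(0, 4, size=200)
    groups = rng.integers(0, 20, size=200)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0))
    scores = cross_val_score(model, X, y, groups=groups,
                             cv=GroupKFold(n_splits=5),
                             scoring="neg_mean_absolute_error")
    print("subject-independent MAE:", -scores.mean())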
  2. Abstract

    In vivo fluorometers use chlorophyll a fluorescence (Fchl) as a proxy to monitor phytoplankton biomass. However, the fluorescence yield of Fchl is affected by photoprotection processes triggered by increased irradiance (nonphotochemical quenching; NPQ), creating diurnal reductions in Fchl that may be mistaken for phytoplankton biomass reductions. Published correction methods are mostly designed for pelagic oceans and are ill suited for inland waters or for high-frequency data collection. A machine learning-based method was developed to correct vertical profiler data from an oligotrophic lake. NPQ was estimated as a percent reduction in Fchl by comparing daytime values to mean, unquenched values from the previous night. A random forest regression was trained on sensor data collected coincident with Fchl, including solar radiation, water temperature, depth, and dissolved oxygen saturation. The accuracy of the model was assessed using a grouped 10-fold cross validation (mean absolute error [MAE]: 7.6%; root mean square error [RMSE]: 10.2%), and the model was then used to correct Fchl profiles. The model also predicted NPQ and corrected unseen Fchl profiles from a future period with excellent results (MAE: 9.0%; RMSE: 14.4%). Fchl profiles were then correlated to laboratory results, allowing corrected profiles to be compared directly to collected samples. The correction reduced error (RMSE) due to NPQ from 0.67 μg L⁻¹ to 0.33 μg L⁻¹ when compared to uncorrected Fchl data. These results suggest that the use of machine learning models may be an effective way to correct for NPQ and may have universal applicability.
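    The correction idea lends itself to a short sketch: train a random forest to predict the percent NPQ from the coincident sensor data, then rescale the quenched daytime Fchl. The column names, placeholder values, and the final rescaling step are assumptions for illustration, not the authors' code.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # One row per Fchl measurement; "npq_pct" is the percent reduction in
    # Fchl relative to the previous night's mean unquenched value.
    n = 500
    rng = np.random.default_rng(1)
    df = pd.DataFrame({
        "solar_radiation": rng.random(n) * 1000,   # W m-2
        "water_temp": 5 + rng.random(n) * 15,      # deg C
        "depth": rng.random(n) * 30,               # m
        "do_saturation": 80 + rng.random(n) * 40,  # %
        "npq_pct": rng.random(n) * 60,
        "fchl_raw": rng.random(n) * 5,             # ug L-1
    })

    features = ["solar_radiation", "water_temp", "depth", "do_saturation"]
    rf = RandomForestRegressor(n_estimators=300, random_state=0)
    rf.fit(df[features], df["npq_pct"])

    # Undo the predicted fractional reduction to recover unquenched Fchl.
    pred_npq = rf.predict(df[features])
    df["fchl_corrected"] = df["fchl_raw"] / (1 - pred_npq / 100)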

     
    Accurately predicting the performance of radiant slab systems can be challenging due to the large thermal capacitance of the radiant slab and room temperature stratification. Current methods for predicting the heating and cooling energy consumption of hydronic radiant slabs include detailed first-principles (e.g., finite difference) and reduced-order (e.g., thermal Resistor-Capacitor (RC) network) models. Creating and calibrating detailed first-principles models, as well as detailed RC network models, for predicting the performance of radiant slabs requires substantial effort. To develop improved control, monitoring, and diagnostic methods, there is a need for simpler models that can be readily trained using in-situ measurements. In this study, we explored a novel hybrid modeling method that integrates a simple RC network model with an evolving learning-based algorithm, termed Growing Gaussian Mixture Regression (GGMR), to predict the heating and cooling rates of a radiant slab system for a Living Laboratory office space. The RC network model predicts the heating or cooling load of the radiant slab system, which is provided as an input to the GGMR model. Three modeling approaches were considered in this study: 1) an RC network model; 2) a GGMR model; and 3) the proposed hybrid model combining RC and GGMR. The three modeling methods were compared for predicting the energy use of a radiant slab system of a Living Laboratory office space using measurement data from January 15th to March 7th, 2022. The first two weeks of data were used for training, while the remaining data were used for testing all three modeling methods. The hybrid approach had a Normalized Root Mean Square Error (NRMSE) of 15.46 percent (8.62 percent less than the RC-Model 3 alone and 19.36 percent less than the GGMR alone), a Coefficient of Variation of RMSE (CVRMSE) of 6.43 percent (3.59 percent less than the RC-Model 3 and 8.05 percent less than the GGMR), a Mean Absolute Error (MAE) of 3.61 kW (2.13 kW and 3.87 kW less than the RC-Model 3 and GGMR, respectively), and a Mean Absolute Percentage Error (MAPE) of 5.28 percent (3.85 percent and 3.92 percent lower than the RC-Model 3 and GGMR, respectively).
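    For reference, the error metrics quoted above can be computed as in the sketch below, assuming the conventional definitions (NRMSE normalized by the range of the measurements, CVRMSE by their mean); the abstract itself does not spell out the normalizations.

    import numpy as np

    def error_metrics(measured, predicted):
        # Standard error metrics for comparing predicted vs. measured loads.
        measured = np.asarray(measured, dtype=float)
        predicted = np.asarray(predicted, dtype=float)
        err = predicted - measured
        rmse = np.sqrt(np.mean(err ** 2))
        return {
            "NRMSE_pct": 100 * rmse / (measured.max() - measured.min()),
            "CVRMSE_pct": 100 * rmse / measured.mean(),
            "MAE": np.mean(np.abs(err)),
            "MAPE_pct": 100 * np.mean(np.abs(err / measured)),
        }

    # Example: measured vs. predicted heating/cooling rates (kW).
    print(error_metrics([10.0, 12.5, 9.0, 14.0], [10.4, 12.0, 9.8, 13.1]))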
  4. Abstract

    Kelvin–Helmholtz instability (KH) waves have been broadly shown to affect the growth of hydrometeors within a region of falling precipitation, but formation and growth from KH waves at cloud top need further attention. Here, we present detailed observations of cloud-top KH waves that produced a snow plume that extended to the surface. Airborne transects of cloud radar aligned with range height indicator scans from ground-based precipitation radar track the progression and intensity of the KH wave kinetics and precipitation. In situ cloud probes and surface disdrometer measurements are used to quantify the impact of the snow plume on the composition of an underlying supercooled liquid water (SLW) cloud and the snowfall observed at the surface. KH wavelengths of 1.5 km consisted of ∼750-m-wide up- and downdrafts. A distinct fluctus region appeared as a wave-breaking cloud top where the fastest updraft was observed to exceed 5 m s⁻¹. Relatively weaker updrafts of 0.5–1.5 m s⁻¹ beneath the fluctus and partially overlapping the dendritic growth zone were associated with steep gradients in reflectivity of −5 to 20 dBZe in as little as 500-m depths due to rapid growth of pristine planar ice crystals. The falling snow removed ∼80% of the SLW content from the underlying cloud and led to a twofold increase in surface liquid equivalent snowfall rate from 0.6 to 1.3 mm h⁻¹. This paper presents the first known study of cloud-top KH waves producing snowfall with observations of increased snowfall rates at the surface.

     
    In-memory-computing (IMC) SRAM architecture has gained significant attention as it achieves high energy efficiency for computing a convolutional neural network (CNN) model [1]. Recent works investigated the use of analog-mixed-signal (AMS) hardware for high area and energy efficiency [2], [3]. However, AMS hardware output is well known to be susceptible to process, voltage, and temperature (PVT) variations, limiting the computing precision and ultimately the inference accuracy of a CNN. We reconfirmed, through the simulation of a capacitor-based IMC SRAM macro that computes a 256D binary dot product, that the AMS computing hardware has a significant root-mean-square error (RMSE) of 22.5% across the worst-case voltage, temperature (Fig. 16.1.1 top left) and 3-sigma process variations (Fig. 16.1.1 top right). On the other hand, we can implement an IMC SRAM macro using robust digital logic [4], which can virtually eliminate the variability issue (Fig. 16.1.1 top). However, digital circuits require more devices than their AMS counterparts (e.g., 28 transistors for a mirror full adder [FA]). As a result, a recent digital IMC SRAM shows a lower area efficiency of 6368 F²/b (22nm, 4b/4b weight/activation) [5] than the AMS counterpart (1170 F²/b, 65nm, 1b/1b) [3]. In light of this, we aim to adopt approximate arithmetic hardware to improve area and power efficiency and present two digital IMC macros (DIMC) with different levels of approximation (Fig. 16.1.1 bottom left). Also, we propose an approximation-aware training algorithm and a number format to minimize the inference accuracy degradation induced by approximate hardware (Fig. 16.1.1 bottom right). We prototyped a 28nm test chip: for a 1b/1b CNN model for CIFAR-10 and across a 0.5-to-1.1V supply, the DIMC with double-approximate hardware (DIMC-D) achieves 2569 F²/b, 932-2219 TOPS/W, 475-20032 GOPS, and 86.96% accuracy, while for a 4b/1b CNN model, the DIMC with the single-approximate hardware (DIMC-S) achieves 3814 F²/b, 458-990 TOPS/W
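    As context for the 256D binary dot product mentioned above, binary (1b/1b) IMC designs commonly realize it as an XNOR followed by a population count. The short reference model below illustrates that arithmetic in software; it is an assumed illustration, not a description of the prototyped macro.

    import numpy as np

    D = 256  # dot-product dimension

    def binary_dot(w_bits, x_bits):
        # {-1, +1} values are encoded as 0/1 bits; XNOR counts matches,
        # and 2*matches - D maps the count back to the signed dot product.
        matches = np.count_nonzero(w_bits == x_bits)
        return 2 * matches - len(w_bits)

    rng = np.random.default_rng(2)
    w = rng.integers(0, 2, D, dtype=np.uint8)
    x = rng.integers(0, 2, D, dtype=np.uint8)

    # Check against the +/-1 arithmetic the encoding represents.
    signed = np.dot(2 * w.astype(int) - 1, 2 * x.astype(int) - 1)
    assert binary_dot(w, x) == signed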