


Title: A Deep Learning Filter for the Intraseasonal Variability of the Tropics
Abstract

This paper presents a novel application of convolutional neural network (CNN) models for filtering the intraseasonal variability of the tropical atmosphere. In this deep learning filter, two convolutional layers are applied sequentially in a supervised machine learning framework to extract the intraseasonal signal from the total daily anomalies. The CNN-based filter can be tailored to each field, much like fast Fourier transform (FFT) filtering methods. When applied to two different fields (zonal wind stress and outgoing longwave radiation), the index of agreement between the filtered signal obtained using the CNN-based filter and a conventional weight-based filter is between 95% and 99%. The advantage of the CNN-based filter over conventional filters is its applicability to time series whose length is comparable to the period of the signal being extracted.
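The two-sequential-convolutions structure described above can be sketched in a few lines. This is a minimal illustration, not the paper's trained model: the kernel weights below are hypothetical fixed smoothing kernels standing in for weights that would be learned in the supervised framework, and the 45-day sine plus noise is synthetic stand-in data for daily anomalies.

```python
import numpy as np

def conv_layer(x, kernel):
    """One 'convolutional layer': same-length 1-D convolution."""
    return np.convolve(x, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.arange(720)                               # two years of daily data
intraseasonal = np.sin(2 * np.pi * t / 45.0)     # synthetic ~45-day signal
anomalies = intraseasonal + 0.5 * rng.standard_normal(t.size)

k1 = np.ones(31) / 31.0                          # hypothetical layer-1 kernel
k2 = np.ones(11) / 11.0                          # hypothetical layer-2 kernel
filtered = conv_layer(conv_layer(anomalies, k1), k2)

# correlation with the true intraseasonal signal (edges trimmed)
r = np.corrcoef(filtered[60:-60], intraseasonal[60:-60])[0, 1]
print(round(r, 2))
```

Even with these crude fixed kernels, the two-stage convolution recovers a signal highly correlated with the embedded intraseasonal component; the paper's contribution is learning the kernels so the filter matches a target band for each field.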

Significance Statement

This study proposes a new method for discovering hidden connections in data representative of tropical atmosphere variability. The method makes use of an artificial intelligence (AI) algorithm that combines a mathematical operation known as convolution with a mathematical model, built to reflect the behavior of the human brain, known as an artificial neural network. Our results show that the filtered data produced by the AI-based method are consistent with the results obtained using conventional mathematical algorithms. The advantage of the AI-based method is that it can be applied to cases for which the conventional methods have limitations, such as forecast (hindcast) data or real-time monitoring of tropical variability in the 20–100-day range.

 
Award ID(s):
2018631
NSF-PAR ID:
10472790
Author(s) / Creator(s):
Publisher / Repository:
American Meteorological Society
Journal Name:
Artificial Intelligence for the Earth Systems
Volume:
2
Issue:
4
ISSN:
2769-7525
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract Motivation

The best-performing named entity recognition (NER) methods for biomedical literature rely on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features or task-specific knowledge in non-biomedical NER tasks. In the biomedical domain, however, the same architectures do not yield performance competitive with conventional machine learning models.

    Results

We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages local context through n-gram character and word embeddings via a convolutional neural network (CNN). We call this approach GRAM-CNN. To automatically label a word, the method uses the local information around that word; it therefore requires no domain-specific knowledge or feature engineering and can, in principle, be applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. These results place GRAM-CNN among the leading biomedical NER methods. To the best of our knowledge, we are the first to apply CNN-based structures to BioNER problems.
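The n-gram character convolution that the abstract describes can be illustrated with a single building block: embed each character, slide an n-gram-wide kernel over the embeddings, and max-pool over positions to get a fixed-size word feature. This is our own sketch, not the released GRAM-CNN code; the embedding table and kernels are random stand-ins for learned parameters, and the dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
alphabet = "abcdefghijklmnopqrstuvwxyz"
char_emb = rng.standard_normal((len(alphabet), 8))   # stand-in 8-dim char embeddings
kernels = rng.standard_normal((16, 3, 8))            # 16 stand-in tri-gram kernels

def ngram_cnn_features(word, n=3):
    """Convolve n-gram kernels over the word's character embeddings,
    then max-pool over positions to get a fixed-size word vector."""
    x = np.stack([char_emb[alphabet.index(c)] for c in word])          # (len, 8)
    windows = np.stack([x[i:i + n] for i in range(len(word) - n + 1)])  # (pos, n, 8)
    conv = np.einsum("pnd,fnd->pf", windows, kernels)                  # (pos, 16)
    return np.maximum(conv, 0).max(axis=0)                             # ReLU + max pool

vec = ngram_cnn_features("protein")
print(vec.shape)   # fixed-size feature regardless of word length
```

Because the pooling step collapses the position axis, words of any length map to the same feature size, which is what lets a downstream tagger consume these local-context features uniformly.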

    Availability and implementation

    The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN.

    Supplementary information

    Supplementary data are available at Bioinformatics online.

     
  2. Purpose

    To develop an improved k‐space reconstruction method using scan‐specific deep learning that is trained on autocalibration signal (ACS) data.

    Theory

    Robust artificial‐neural‐networks for k‐space interpolation (RAKI) reconstruction trains convolutional neural networks on ACS data. This enables nonlinear estimation of missing k‐space lines from acquired k‐space data with improved noise resilience, as opposed to conventional linear k‐space interpolation‐based methods, such as GRAPPA, which are based on linear convolutional kernels.

    Methods

The training algorithm minimizes a mean square error loss over the target points in the ACS region using gradient descent. The neural network contains three layers of convolutional operators, two of which include nonlinear activation functions. The noise performance and reconstruction quality of the RAKI method were compared with GRAPPA in phantom, as well as in neurological and cardiac in vivo data sets.
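The network structure in the Methods can be sketched as follows. This is a schematic 1-D, real-valued analogue only (the actual RAKI method operates on multi-coil 2-D complex k-space): three convolutional layers with ReLU after the first two, and an MSE loss over stand-in ACS targets. Kernel sizes, the signal, and the weights are all hypothetical; in practice the weights would be fitted by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv1d(x, w):
    """One 1-D convolutional 'layer' (valid convolution)."""
    return np.convolve(x, w, mode="valid")

def raki_forward(acquired, w1, w2, w3):
    h = np.maximum(conv1d(acquired, w1), 0)   # layer 1 + ReLU
    h = np.maximum(conv1d(h, w2), 0)          # layer 2 + ReLU
    return conv1d(h, w3)                      # layer 3, linear output

acs = rng.standard_normal(64)                              # stand-in ACS line
w1, w2, w3 = (rng.standard_normal(5) * 0.1 for _ in range(3))

pred = raki_forward(acs, w1, w2, w3)
target = acs[6:-6]          # receptive-field centers (three length-5 kernels)
loss = float(np.mean((pred - target) ** 2))                # MSE over ACS targets
print(pred.shape, round(loss, 3))
```

Gradient descent on this loss, computed only where ACS data provide ground truth, is what makes the method scan-specific: each scan's own calibration data trains its own interpolator.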

    Results

Phantom imaging shows that the proposed RAKI method outperforms GRAPPA at high (≥4) acceleration rates, both visually and quantitatively. Quantitative cardiac imaging shows improved noise resilience over GRAPPA at high acceleration rates (23% at rate 4 and 48% at rate 5). The same trend of improved noise resilience is also observed in high-resolution brain imaging at high acceleration rates.

    Conclusion

    The RAKI method offers a training database‐free deep learning approach for MRI reconstruction, with the potential to improve many existing reconstruction approaches, and is compatible with conventional data acquisition protocols.

     
  3. Abstract

Two distinct features of anthropogenic climate change, warming in the tropical upper troposphere and warming at the Arctic surface, have competing effects on the midlatitude jet stream’s latitudinal position, often referred to as a “tug-of-war.” Studies that investigate the jet’s response to these thermal forcings show that it is sensitive to model type, season, initial atmospheric conditions, and the shape and magnitude of the forcing. Much of this past work focuses on studying a simulation’s response to external manipulation. In contrast, we explore the potential to train a convolutional neural network (CNN) on internal variability alone and then use it to examine possible nonlinear responses of the jet to tropospheric thermal forcing that more closely resemble anthropogenic climate change. Our approach leverages the idea behind the fluctuation–dissipation theorem, which relates the internal variability of a system to its forced response but has so far been used only to quantify linear responses. We train a CNN on data from a long control run of the CESM dry dynamical core and show that it is able to skillfully predict the nonlinear response of the jet to sustained external forcing. The trained CNN provides a quick method for exploring the jet stream sensitivity to a wide range of tropospheric temperature tendencies and, considering that this method can likely be applied to any model with a long control run, could be useful for early-stage experiment design.
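The fluctuation–dissipation idea the authors generalize can be illustrated on a toy linear system: a response operator is estimated purely from the system's internal variability, via lagged covariances of a long "control run." This sketch is our own illustration of that linear baseline (the paper replaces the linear operator with a CNN to capture nonlinear responses); the two-variable system and its dynamics matrix are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])            # toy dynamics: x_{t+1} = A x_t + noise

# generate a long unforced "control run"
x = np.zeros((20000, 2))
for t in range(1, x.shape[0]):
    x[t] = A @ x[t - 1] + rng.standard_normal(2)

# FDT-style estimate from internal variability alone:
# L = C(1) C(0)^{-1} recovers the linear dynamics A
C0 = (x[:-1].T @ x[:-1]) / (x.shape[0] - 1)   # lag-0 covariance
C1 = (x[1:].T @ x[:-1]) / (x.shape[0] - 1)    # lag-1 covariance
L = C1 @ np.linalg.inv(C0)
print(np.round(L, 1))
```

The estimated operator matches the true dynamics without ever forcing the system, which is the property the paper exploits; a CNN trained on the same unforced data can, in addition, represent responses that depend nonlinearly on the forcing.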

     
  4. Abstract

A novel computer vision‐based meteor head echo detection algorithm is developed to study meteor fluxes and their physical properties, including initial range, range coverage, and radial velocity. The proposed Algorithm for Head Echo Automatic Detection (AHEAD) comprises a feature extraction function and a Convolutional Neural Network (CNN). The former is tailored to identify meteor head echoes, and the CNN is then employed to remove false alarms. In testing on meteor data collected with the Jicamarca 50 MHz incoherent scatter radar, the new algorithm detects over 180 meteors per minute at dawn, which is 2 to 10 times more sensitive than prior manual or algorithmic approaches, with a false alarm rate of less than 1 percent. The present work lays the foundation for developing a fully automatic AI‐meteor package that detects, analyzes, and distinguishes among many types of meteor echoes. Furthermore, although initially evaluated on meteor data collected with the Jicamarca VHF incoherent radar, the new algorithm is generic enough that it can be applied to other facilities with minor modifications. The CNN removes up to 98 percent of false alarms according to the testing set. We also present and discuss the physical characteristics of meteors detected with AHEAD, including flux rate, initial range, line of sight velocity, Signal‐to‐Noise Ratio, and noise characteristics. Our results indicate that stronger meteor echoes are detected at a slightly lower altitude and lower radial velocity than other meteors.
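The two-stage structure described for AHEAD, a fast candidate-proposal stage followed by a classifier that rejects false alarms, can be sketched on a toy range-time power map. This is purely illustrative: the data, thresholds, and the stage-2 local-brightness test (standing in for the paper's CNN) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
power = rng.standard_normal((128, 128))    # stand-in range x time power map (noise)
power[40:44, 60:70] += 6.0                 # injected synthetic "head echo" streak

def propose_candidates(p, threshold=4.0):
    """Stage 1 (feature extraction stand-in): flag pixels well above noise."""
    return [tuple(idx) for idx in np.argwhere(p > threshold)]

def reject_false_alarms(p, candidates, win=2, min_mean=3.0):
    """Stage 2 (CNN stand-in): keep candidates whose local neighborhood
    is also bright, so isolated noise spikes are dropped."""
    kept = []
    for r, t in candidates:
        patch = p[max(r - win, 0):r + win + 1, max(t - win, 0):t + win + 1]
        if patch.mean() > min_mean:
            kept.append((r, t))
    return kept

cands = propose_candidates(power)
detections = reject_false_alarms(power, cands)
print(len(cands), len(detections))
```

The division of labor mirrors the abstract: a cheap first pass maximizes sensitivity, and a second, more selective stage bears the burden of suppressing false alarms.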

     
    more » « less
To obtain more consistent measurements through the course of a wheat growing season, we conceived and designed an autonomous robotic platform that performs collision avoidance while navigating in crop rows using spatial artificial intelligence (AI). The main constraint the agronomists have is to not run over the wheat while driving. Accordingly, we trained a spatial deep learning model that helps the robot navigate autonomously in the field while avoiding collisions with the wheat. To train this model, we used publicly available databases of prelabeled wheat images, along with images of wheat that we collected in the field. We used the MobileNet single shot detector (SSD) as our deep learning model to detect wheat in the field. To increase the frame rate for real-time robot response to field environments, we trained MobileNet SSD on the wheat images and used a new stereo camera, the Luxonis Depth AI Camera. Together, the newly trained model and camera achieve a frame rate of 18–23 frames per second (fps)—fast enough for the robot to process its surroundings once every 2–3 inches of driving. Once we verified that the robot accurately detects its surroundings, we addressed its autonomous navigation. The stereo camera allows the robot to determine its distance from the trained objects. In this work, we also developed a navigation and collision avoidance algorithm that utilizes this distance information to help the robot see its surroundings and maneuver in the field, thereby precisely avoiding collisions with the wheat crop. Extensive experiments were conducted to evaluate the performance of our proposed method. We also compared the quantitative results obtained by our proposed MobileNet SSD model with those of other state-of-the-art object detection models, such as the YOLO V5 and Faster region-based convolutional neural network (R-CNN) models. The detailed comparative analysis reveals the effectiveness of our method in terms of both model precision and inference speed.
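The distance-based avoidance logic described above can be sketched as a simple decision rule: given detections with stereo-estimated distances and horizontal image positions, steer away from the nearest wheat detection inside a safety radius. This is our own illustration, not the authors' algorithm; the function name, safety threshold, and command strings are all hypothetical.

```python
SAFE_DISTANCE_M = 0.5   # hypothetical stop/steer threshold, in meters

def avoidance_command(detections, image_width=640):
    """detections: list of (x_center_px, distance_m) for detected wheat.
    Returns a steering command that moves away from the nearest threat."""
    threats = [(x, d) for x, d in detections if d < SAFE_DISTANCE_M]
    if not threats:
        return "forward"
    x_nearest, _ = min(threats, key=lambda t: t[1])   # closest detection
    # steer away from the side of the image the nearest detection is on
    return "steer_right" if x_nearest < image_width / 2 else "steer_left"

# nearest in-range detection is on the left half of the frame -> steer right
print(avoidance_command([(100, 0.3), (500, 1.2)]))
```

At 18–23 fps and one decision every 2–3 inches of travel, even a rule this simple runs well within the control loop's time budget; the real difficulty the paper addresses is making the upstream detections and distances reliable.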

     