Title: Leveraging spatiotemporal information in meteorological image sequences: From feature engineering to neural networks
Abstract

Atmospheric processes involve both space and time. Thus, humans looking at atmospheric imagery can often spot important signals in an animated loop of an image sequence that are not apparent in an individual (static) image. Utilizing such signals with automated algorithms requires the ability to identify complex spatiotemporal patterns in image sequences, which is a very challenging task due to the endless possibilities of patterns in both space and time. Here, we review different concepts and techniques that are useful to extract spatiotemporal signals from meteorological image sequences to expand the effectiveness of AI algorithms for classification and prediction tasks. We first present two applications that motivate the need for these approaches in meteorology, namely the detection of convection from satellite imagery and solar forecasting. Then we provide an overview of concepts and techniques that are helpful for the interpretation of meteorological image sequences, such as (a) feature engineering methods using (i) meteorological knowledge, (ii) classic image processing, (iii) harmonic analysis, and (iv) topological data analysis; (b) ways to use convolutional neural networks for this purpose with emphasis on discussing different convolution filters (2D/3D/LSTM-convolution); and (c) a brief survey of several other concepts, including the concept of “attention” in neural networks and its utility for the interpretation of image sequences and strategies from self-supervised and transfer learning to reduce the need for large labeled datasets. We hope that presenting an overview of these tools—many of which are not new but underutilized in this context—will accelerate progress in this area.
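As a concrete illustration of point (b), the following is a minimal sketch, assuming TensorFlow/Keras, that contrasts the three convolution types named above on a toy frame sequence; the shapes, filter counts, and kernel sizes are illustrative and are not taken from the article.

```python
# A minimal sketch, assuming TensorFlow/Keras, of the three convolution types
# named in (b); shapes, filter counts, and kernel sizes are illustrative.
import numpy as np
import tensorflow as tf

# Toy batch: 8 sequences of 10 frames, each 64x64 pixels with 1 channel.
seq = np.random.rand(8, 10, 64, 64, 1).astype("float32")

# 2D convolution applied frame by frame: spatial patterns only, no temporal context.
out2d = tf.keras.layers.TimeDistributed(
    tf.keras.layers.Conv2D(16, 3, padding="same"))(seq)             # (8, 10, 64, 64, 16)

# 3D convolution: one filter slides jointly over time and space.
out3d = tf.keras.layers.Conv3D(16, (3, 3, 3), padding="same")(seq)  # same shape

# LSTM-convolution (ConvLSTM): recurrent over time, convolutional in space.
outlstm = tf.keras.layers.ConvLSTM2D(
    16, 3, padding="same", return_sequences=True)(seq)              # (8, 10, 64, 64, 16)
```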

 
Award ID(s):
2019758
PAR ID:
10512590
Publisher / Repository:
Cambridge University Press
Journal Name:
Environmental Data Science
Volume:
2
ISSN:
2634-4602
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Abstract The method of neural networks (also known as deep learning) has opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image-to-image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, many open questions remain regarding the use of neural networks for working with meteorological images, such as best practices for evaluation, tuning, and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of receptive fields, underutilized meteorological performance measures, and methods for neural network interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative, meteorologist-driven discovery process that builds on experimental design and hypothesis generation and testing. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image-to-image translation.
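Because this abstract singles out receptive fields, a short sketch of the standard receptive-field recurrence may be useful; the layer stack below is an invented example, not a network from the article.

```python
# Minimal sketch of the standard receptive-field recurrence for a stack of
# convolution/pooling layers; the (kernel, stride) list is an invented example.
layers = [(3, 1), (3, 1), (2, 2),   # two 3x3 convs, then a 2x2 pool
          (3, 1), (3, 1), (2, 2)]

rf, jump = 1, 1                      # receptive field and input-pixel step
for kernel, stride in layers:
    rf += (kernel - 1) * jump        # each layer widens the field by (k-1)*jump
    jump *= stride                   # striding makes later layers coarser
print(f"theoretical receptive field: {rf}x{rf} input pixels")  # 16x16 here
```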
  2. The convolution operation plays a vital role in a wide range of critical algorithms across various domains, such as digital image processing, convolutional neural networks, and quantum machine learning. In existing implementations, particularly in quantum neural networks, convolution operations are usually approximated by the application of filters with data strides that are equal to the filter window sizes. One challenge with these implementations is preserving the spatial and temporal localities of the input features, specifically for data with higher dimensions. In addition, the deep circuits required to perform quantum convolution with a unity stride, especially for multidimensional data, increase the risk of violating decoherence constraints. In this work, we propose depth-optimized circuits for performing generalized multidimensional quantum convolution operations with unity stride targeting applications that process data with high dimensions, such as hyperspectral imagery and remote sensing. We experimentally evaluate and demonstrate the applicability of the proposed techniques by using real-world, high-resolution, multidimensional image data on a state-of-the-art quantum simulator from IBM Quantum.
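To make the stride distinction concrete, here is a minimal classical (non-quantum) analogue in NumPy, contrasting a stride equal to the window size with a unity stride; the data and filter are illustrative only.

```python
# Classical NumPy analogue of the stride distinction above; data are illustrative.
import numpy as np

def conv2d(image, kernel, stride):
    """Valid 2D convolution (cross-correlation) with a given stride."""
    kh, kw = kernel.shape
    rows = (image.shape[0] - kh) // stride + 1
    cols = (image.shape[1] - kw) // stride + 1
    out = np.empty((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = image[i * stride:i * stride + kh, j * stride:j * stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

image = np.arange(36, dtype=float).reshape(6, 6)
kernel = np.ones((2, 2)) / 4.0            # 2x2 averaging filter

coarse = conv2d(image, kernel, stride=2)  # stride == window size: 3x3 output
fine = conv2d(image, kernel, stride=1)    # unity stride: 5x5 output, locality kept
```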

     
  3. Abstract The scientific community has expressed interest in the potential of phased array radars (PARs) to observe the atmosphere at finer spatial and temporal scales. Although convergence has occurred between the meteorological and engineering communities, a need remains to increase meteorologists' access to PAR technology. Here, we facilitate these interdisciplinary efforts in the field of ground-based PARs for atmospheric studies. We cover high-level technical concepts and terminology for PARs as applied to studies of the atmosphere. A historical perspective is provided as context along with an overview of PAR system architectures, technical challenges, and opportunities. Envisioned scan strategies are summarized because they are distinct from those of traditional mechanically scanned radars and are the most advantageous for high-resolution studies of the atmosphere. Open access to PAR data is emphasized as a mechanism to educate the future generation of atmospheric scientists. Finally, a vision for the future of operational networks, research facilities, and expansion into complementary radar wavelengths is provided.
  4. Abstract. Permafrost thaw has been observed at several locations across the Arctic tundra in recent decades; however, the pan-Arctic extent and spatiotemporal dynamics of thaw remain poorly characterized. Thaw-induced differential ground subsidence and dramatic microtopographic transitions, such as the transformation of low-centered ice-wedge polygons (IWPs) into high-centered IWPs, can be characterized using very high spatial resolution (VHSR) commercial satellite imagery. Arctic researchers need an accurate estimate of the distribution of IWPs and their status across the tundra domain. The entire Arctic has been imaged at 0.5 m resolution by commercial satellite sensors; however, mapping efforts are still limited to small scales and confined to manual or semi-automated methods. Knowledge discovery through artificial intelligence (AI), big imagery, and high-performance computing (HPC) resources is just starting to be realized in Arctic science. Large-scale deployment of VHSR imagery resources requires sophisticated computational approaches to automated image interpretation coupled with efficient use of HPC resources. We are in the process of developing an automated Mapping Application for Permafrost Land Environment (MAPLE) that combines big imagery, AI, and HPC resources. MAPLE uses deep learning (DL) convolutional neural network (CNN) algorithms on HPCs to automatically map IWPs from VHSR commercial satellite imagery across large geographic domains. We trained and tasked a DL CNN semantic object instance segmentation algorithm to automatically classify IWPs from VHSR satellite imagery. Overall, our findings demonstrate the robust performance of the IWP mapping algorithm in diverse tundra landscapes and lay a firm foundation for its operational-level application in repeated documentation of circumpolar permafrost disturbances.
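One computational step implied here, applying a CNN to scenes far larger than its input window, is sliding-window tiling with stitching. The sketch below is a hypothetical illustration; the function segment_tile, the tile size, and the overlap are assumed placeholders rather than MAPLE's actual interface.

```python
# Hypothetical sliding-window tiling over a large satellite scene; `segment_tile`,
# the tile size, and the overlap are placeholders, not MAPLE's actual interface.
import numpy as np

def map_scene(scene, segment_tile, tile=512, overlap=64):
    """Stitch per-tile segmentation masks over a (H, W, bands) scene."""
    h, w = scene.shape[:2]
    mask = np.zeros((h, w), dtype=np.uint8)
    step = tile - overlap
    for top in range(0, h, step):
        for left in range(0, w, step):
            window = scene[top:top + tile, left:left + tile]
            pred = segment_tile(window)           # per-pixel class labels
            mask[top:top + window.shape[0],
                 left:left + window.shape[1]] = pred
    return mask
```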

     
  5. Synthetic data is highly useful for training machine learning systems that perform image-based 3D reconstruction, as it can both extend existing general-purpose datasets and be tailored to train neural networks for specific learning tasks of interest. In this paper, we introduce and utilize a synthetic data generation suite capable of generating data given existing 3D scene models as input. Specifically, we use our tool to generate image sequences for use with Multi-View Stereo (MVS), moving a camera through the virtual space according to user-chosen camera parameters. We evaluate how the given camera parameters and the type of 3D environment affect how applicable the generated image sequences are to the MVS task, using five pre-trained neural networks on image sequences generated from three different 3D scene datasets. We obtain predictions for each combination of parameter value and input image sequence, using standard error metrics to analyze the differences in depth predictions across 3D datasets, parameters, and networks. Among other results, we find that camera height and vertical camera viewing angle are the parameters that cause the most variation in depth prediction errors on these image sequences.
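For reference, the sketch below shows the kind of standard depth-error metrics such an analysis might use; the specific metrics and thresholds are common conventions in depth estimation, not necessarily those reported in the paper.

```python
# Common depth-error metrics (sketch); thresholds and masking are conventional
# choices, not necessarily those used in the paper. Assumes positive predictions.
import numpy as np

def depth_errors(pred, gt, valid_min=1e-3):
    """Mean absolute / RMS error and inlier ratio on valid ground-truth pixels."""
    mask = gt > valid_min                      # ignore pixels without ground truth
    diff = pred[mask] - gt[mask]
    return {
        "abs_err": np.mean(np.abs(diff)),
        "rmse": np.sqrt(np.mean(diff ** 2)),
        "delta<1.25": np.mean(np.maximum(pred[mask] / gt[mask],
                                         gt[mask] / pred[mask]) < 1.25),
    }
```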