Title: Using Deep Learning to Nowcast the Spatial Coverage of Convection from Himawari-8 Satellite Data
Abstract: Predicting the timing and location of thunderstorms (“convection”) allows for preventive actions that can save both lives and property. We have applied U-nets, a deep-learning-based type of neural network, to forecast convection on a grid at lead times up to 120 min. The goal is to make skillful forecasts with only present and past satellite data as predictors. Specifically, predictors are multispectral brightness-temperature images from the Himawari-8 satellite, while targets (ground truth) are provided by weather radars in Taiwan. U-nets are becoming popular in atmospheric science due to their advantages for gridded prediction. Furthermore, we use three novel approaches to advance U-nets in atmospheric science. First, we compare three architectures—vanilla, temporal, and U-net++—and find that vanilla U-nets are best for this task. Second, we train U-nets with the fractions skill score, which is spatially aware, as the loss function. Third, because we do not have adequate ground truth over the full Himawari-8 domain, we train the U-nets with small radar-centered patches, then apply trained U-nets to the full domain. Also, we find that the best predictions are given by U-nets trained with satellite data from multiple lag times, not only the present. We evaluate U-nets in detail—by time of day, month, and geographic location—and compare them to persistence models. The U-nets outperform persistence at lead times ≥ 60 min, and at all lead times the U-nets provide a more realistic climatology than persistence. Our code is available publicly.
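The fractions skill score mentioned in the abstract compares event *fractions* over spatial neighborhoods instead of matching individual pixels, which rewards forecasts that are close but slightly displaced. The sketch below is illustrative only (not the authors' implementation): the window size is a placeholder, and a real training loss would replace the explicit loops with a differentiable pooling operation.

```python
import numpy as np

def neighborhood_fractions(field, window):
    """Fraction of event pixels in each window-by-window box (valid positions)."""
    h, w = field.shape
    out = np.empty((h - window + 1, w - window + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = field[i:i + window, j:j + window].mean()
    return out

def fss(obs, fcst, window=3):
    """Fractions skill score between two binary grids: 1 is perfect, 0 is no skill."""
    o = neighborhood_fractions(obs.astype(float), window)
    f = neighborhood_fractions(fcst.astype(float), window)
    mse = np.mean((o - f) ** 2)
    ref = np.mean(o ** 2) + np.mean(f ** 2)
    return 1.0 if ref == 0 else 1.0 - mse / ref
```

As a loss function one would minimize 1 − FSS, so that spatially close (even if imperfectly aligned) convection forecasts are penalized less than pixel-wise losses would penalize them.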
Award ID(s):
2019758
PAR ID:
10302440
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
American Meteorological Society
Date Published:
Journal Name:
Monthly Weather Review
Volume:
149
Issue:
12
ISSN:
0027-0644
Page Range / eLocation ID:
p. 3897-3921
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Abstract: The prediction of large fluctuations in the ground magnetic field (dB/dt) is essential for preventing damage from Geomagnetically Induced Currents. Directly forecasting these fluctuations has proven difficult, but accurately determining the risk of extreme events can allow for the worst of the damage to be prevented. Here we trained Convolutional Neural Network models for eight mid‐latitude magnetometers to predict the probability that dB/dt will exceed the 99th percentile threshold 30–60 min in the future. Two model frameworks were compared: a model trained using solar wind data from the Advanced Composition Explorer (ACE) satellite, and another model trained on both ACE and SuperMAG ground magnetometer data. The models were compared to examine if the addition of current ground magnetometer data significantly improved the forecasts of dB/dt in the future prediction window. A bootstrapping method was employed using a random split of the training and validation data to provide a measure of uncertainty in model predictions. The models were evaluated on the ground truth data during eight geomagnetic storms, and a suite of evaluation metrics is presented. The models were also compared to a persistence model to ensure that the model using both datasets did not over‐rely on dB/dt values in making its predictions. Overall, we find that the models using both the solar wind and ground magnetometer data had better metric scores than the solar-wind-only and persistence models, and were able to capture more spatially localized variations in the dB/dt threshold crossings.
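The dB/dt study above frames the target as a binary exceedance event in a future time window. A minimal sketch of how such labels might be built from a dB/dt time series follows; the 30-min lead and 30-min window mirror the "30–60 min in the future" description, but the function itself is a hypothetical construction, not code from the paper.

```python
import numpy as np

def make_exceedance_labels(dbdt, threshold, lead=30, window=30):
    """Label minute t as 1 if dB/dt exceeds `threshold` anywhere in the
    future window [t + lead, t + lead + window), else 0."""
    n = len(dbdt)
    labels = np.zeros(n - lead - window, dtype=int)
    for t in range(len(labels)):
        labels[t] = int(dbdt[t + lead:t + lead + window].max() >= threshold)
    return labels
```

In the paper's setup the threshold would be the station's 99th percentile of dB/dt, computed from the training period rather than passed in by hand.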
  2. Abstract: Hailstorms cause billions of dollars in damage across the United States each year. Part of this cost could be reduced by increasing warning lead times. To contribute to this effort, we developed a nowcasting machine learning model that uses a 3D U-Net to produce gridded severe hail nowcasts for up to 40 min in advance. The three U-Net dimensions uniquely incorporate one temporal and two spatial dimensions. Our predictors consist of a combination of output from the National Severe Storms Laboratory Warn-on-Forecast System (WoFS) numerical weather prediction ensemble and remote sensing observations from Vaisala’s National Lightning Detection Network (NLDN). Ground truth for prediction was derived from the maximum expected size of hail calculated from the gridded NEXRAD WSR-88D radar (GridRad) dataset. Our U-Net was evaluated by comparing its test set performance against rigorous hail nowcasting baselines. These baselines included the WoFS ensemble Hail and Cloud Growth Model (HAILCAST) and a logistic regression model trained on WoFS 2–5-km updraft helicity. The 3D U-Net outperformed both these baselines for all forecast period time steps. Its predictions yielded a neighborhood maximum critical success index (max CSI) of ∼0.48 and ∼0.30 at forecast minutes 20 and 40, respectively. These max CSIs exceeded the ensemble HAILCAST max CSIs by as much as ∼0.35. The NLDN observations were found to increase the U-Net performance by more than a factor of 4 at some time steps. This system has shown success when nowcasting hail during complex severe weather events, and if used in an operational environment, may prove valuable.
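The max CSI reported above is the critical success index maximized over a sweep of probability thresholds. Setting aside the spatial-neighborhood step (which the paper applies before scoring), a bare-bones, purely illustrative version looks like:

```python
import numpy as np

def csi(obs, pred):
    """Critical success index: hits / (hits + misses + false alarms)."""
    hits = np.sum((obs == 1) & (pred == 1))
    misses = np.sum((obs == 1) & (pred == 0))
    false_alarms = np.sum((obs == 0) & (pred == 1))
    denom = hits + misses + false_alarms
    return hits / denom if denom else 0.0

def max_csi(obs, probs, thresholds=np.linspace(0.05, 0.95, 19)):
    """Best CSI over a sweep of probability thresholds."""
    return max(csi(obs, (probs >= t).astype(int)) for t in thresholds)
```

Because CSI ignores correct negatives, it is a common choice for rare events like severe hail, where "no hail anywhere" would otherwise dominate accuracy-style scores.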
  3. Recent advancements in two-photon calcium imaging have enabled scientists to record the activity of thousands of neurons with cellular resolution. This scope of data collection is crucial to understanding the next generation of neuroscience questions, but analyzing these large recordings requires automated methods for neuron segmentation. Supervised methods for neuron segmentation achieve state-of-the-art accuracy and speed but currently require large amounts of manually generated ground truth training labels. We reduced the required number of training labels by designing a semi-supervised pipeline. Our pipeline used neural network ensembling to generate pseudolabels to train a single shallow U-Net. We tested our method on three publicly available datasets and compared our performance to three widely used segmentation methods. Our method outperformed other methods when trained on a small number of ground truth labels and could achieve state-of-the-art accuracy after training on approximately a quarter of the number of ground truth labels as supervised methods. When trained on many ground truth labels, our pipeline attained higher accuracy than that of state-of-the-art methods. Overall, our work will help researchers accurately process large neural recordings while minimizing the time and effort needed to generate manual labels.
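The pseudolabeling step described above can be pictured as averaging the per-pixel probabilities of an ensemble and keeping only pixels the ensemble agrees on. The following is a rough sketch under assumed conventions (the 0.9 confidence cutoff and the agreement rule are placeholders, not values from the paper):

```python
import numpy as np

def ensemble_pseudolabels(prob_maps, conf=0.9):
    """Average per-pixel sigmoid outputs from an ensemble of segmentation
    networks, then keep only confidently labeled pixels.

    prob_maps: array of shape (n_models, H, W).
    Returns (labels, mask): binary pseudolabels and a boolean mask marking
    pixels confident enough to use when training the student U-Net.
    """
    mean = prob_maps.mean(axis=0)
    labels = (mean >= 0.5).astype(int)
    mask = (mean >= conf) | (mean <= 1.0 - conf)
    return labels, mask
```

The student network would then be trained only on masked pixels, so disagreement among ensemble members does not inject noisy labels.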
  4. Ground truth depth information is necessary for many computer vision tasks. Collecting this information is challenging, especially for outdoor scenes. In this work, we propose utilizing single-view depth prediction neural networks pre-trained on synthetic scenes to generate relative depth, which we call pseudo-depth. This approach is a less expensive option as the pre-trained neural network obtains accurate depth information from synthetic scenes, which does not require any expensive sensor equipment and takes less time. We measure the usefulness of pseudo-depth from pre-trained neural networks by training indoor/outdoor binary classifiers with and without it. We also compare the difference in accuracy between using pseudo-depth and ground truth depth. We experimentally show that adding pseudo-depth to training achieves a 4.4% performance boost over the non-depth baseline model on DIODE, a large standard test dataset, retaining 63.8% of the performance boost achieved from training a classifier on RGB and ground truth depth. It also boosts performance by 1.3% on another dataset, SUN397, for which ground truth depth is not available. Our result shows that it is possible to take information obtained from a model pre-trained on synthetic scenes and successfully apply it beyond the synthetic domain to real-world data.
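One simple way to feed pseudo-depth to a classifier, as the abstract describes, is to append the predicted relative-depth map as an extra input channel alongside RGB. The helper below is a hypothetical sketch (the normalization to [0, 1] and the callable `depth_model` interface are assumptions, not the paper's pipeline):

```python
import numpy as np

def add_pseudo_depth_channel(rgb, depth_model):
    """Append a normalized relative-depth map, predicted by a pretrained
    single-view depth network, as a fourth channel of an RGB image.

    rgb: (H, W, 3) array; depth_model: callable mapping rgb -> (H, W) depth.
    """
    depth = depth_model(rgb)
    depth = (depth - depth.min()) / (np.ptp(depth) + 1e-8)  # scale to [0, 1]
    return np.concatenate([rgb, depth[..., None]], axis=-1)
```

Normalizing to relative depth matters here: networks pre-trained on synthetic scenes predict depth up to an unknown scale, so only the relative ordering transfers reliably to real-world images.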
  5. For people with visual impairments, photography is essential in identifying objects through remote sighted help and image recognition apps. This is especially the case for teachable object recognizers, where recognition models are trained on a user's photos. Here, we propose real-time feedback for communicating the location of an object of interest in the camera frame. Our audio-haptic feedback is powered by a deep learning model that estimates the object center location based on its proximity to the user's hand. To evaluate our approach, we conducted a user study in the lab, where participants with visual impairments (N=9) used our feedback to train and test their object recognizer in vanilla and cluttered environments. We found that very few photos did not include the object (2% in the vanilla and 8% in the cluttered environment), and the recognition performance was promising even for participants with no prior camera experience. Participants tended to trust the feedback even though they knew it could be wrong. Our cluster analysis indicates that better feedback is associated with photos that include the entire object. Our results provide insights into factors that can degrade feedback and recognition performance in teachable interfaces.