Abstract: Assessing forced climate change requires extracting the forced signal from the background of climate noise. Traditionally, tools for extracting forced climate change signals have focused on one atmospheric variable at a time; however, using multiple variables can reduce noise and allow for easier detection of the forced response. Following previous work, we train artificial neural networks to predict the year of single- and multi-variable maps from forced climate model simulations. To perform this task, the neural networks learn patterns that allow them to discriminate between maps from different years; that is, they learn the patterns of the forced signal amid the shroud of internal variability and climate model disagreement. When presented with combined input fields (multiple seasons, variables, or both), the neural networks detect the signal of forced change earlier than when given single fields alone, by exploiting complex, nonlinear relationships between multiple variables and seasons. We use layer-wise relevance propagation, a neural network explainability tool, to identify the multivariate patterns learned by the neural networks that serve as reliable indicators of the forced response. These "indicator patterns" vary in time and between climate models, providing a template for investigating inter-model differences in the time evolution of the forced response. This work demonstrates how neural networks and their explainability tools can be harnessed to identify patterns of the forced signal within combined fields.
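As a rough, self-contained sketch of the approach this abstract describes (not the authors' code), the example below trains a small fully connected network to predict the year of flattened multi-variable maps, then computes a gradient-times-input relevance map as a simple stand-in for layer-wise relevance propagation. The grid size, number of variables, synthetic "forced" trend, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

n_lat, n_lon, n_vars = 36, 72, 2             # coarse global grid, two variables (assumed)
n_features = n_lat * n_lon * n_vars          # maps are flattened and concatenated
years = torch.arange(1950, 2100, dtype=torch.float32)

# Stand-in for forced simulations: a weak trend pattern buried in noise.
trend = torch.randn(n_features)                               # hypothetical forced pattern
x = years[:, None] / 100.0 * trend + torch.randn(len(years), n_features)
y = (years - years.mean()) / years.std()                      # standardized target year

model = nn.Sequential(                       # small fully connected regression network
    nn.Linear(n_features, 20), nn.ReLU(),
    nn.Linear(20, 20), nn.ReLU(),
    nn.Linear(20, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(200):                     # plain full-batch MSE regression on the year
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
    loss.backward()
    opt.step()

# Gradient times input: which grid points drive the predicted year for one map?
sample = x[-1:].clone().requires_grad_(True)
model(sample).sum().backward()
relevance = (sample.grad * sample).detach()                   # shape (1, n_features)
relevance_maps = relevance.reshape(n_vars, n_lat, n_lon)      # one relevance map per variable
```

A faithful layer-wise relevance propagation implementation would instead propagate relevance backward through each layer with rules such as LRP-epsilon; gradient times input is used here only to keep the sketch short while conveying the same idea of attributing the predicted year back to individual grid points and variables.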
Abstract: We develop and demonstrate a new interpretable deep learning model specifically designed for image analysis in Earth system science applications. The neural network is designed to be inherently interpretable, rather than explained via post hoc methods. This is achieved by training the network to identify parts of training images that act as prototypes for correctly classifying unseen images. The new network architecture extends the interpretable prototype architecture of a previous study in computer science to incorporate absolute location. This is useful for Earth system science, where images are typically the result of physics-based processes and the information is often geolocated. Although the network is constrained to learn only via similarities to a small number of learned prototypes, it can be trained with only a minimal reduction in accuracy relative to noninterpretable architectures. We apply the new model to two Earth science use cases: a synthetic dataset that loosely represents atmospheric high and low pressure systems, and atmospheric reanalysis fields used to identify the state of tropical convective activity associated with the Madden–Julian oscillation. In both cases, we demonstrate that considering absolute location greatly improves testing accuracy compared with a location-agnostic method. Furthermore, the network architecture identifies specific historical dates that capture multivariate, prototypical behavior of tropical climate variability.

Significance Statement: Machine learning models are incredibly powerful predictors but are often opaque "black boxes." How and why the model makes its predictions is inscrutable; the model is not interpretable. We introduce a new machine learning model specifically designed for image analysis in Earth system science applications. The model is designed to be inherently interpretable and extends previous work in computer science to incorporate location information. This is important because images in Earth system science are typically the result of physics-based processes, and the information is often map based. We demonstrate its use for two Earth science use cases and show that the interpretable network exhibits only a small reduction in accuracy relative to black-box models.
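The core architectural idea, scoring how closely each patch of a feature map matches a learned prototype and weighting that score by a learned map of absolute location, can be sketched as below. This is a minimal illustration under assumed shapes, with a softmax location weighting chosen for simplicity; it is not the published architecture.

```python
import torch
import torch.nn as nn

class LocationAwarePrototypeLayer(nn.Module):
    def __init__(self, n_prototypes, channels, height, width, eps=1e-4):
        super().__init__()
        # Each prototype is a 1x1 patch in feature space (a simplification).
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, channels))
        # One learnable location map per prototype over the feature grid.
        self.location_logits = nn.Parameter(torch.zeros(n_prototypes, height, width))
        self.eps = eps

    def forward(self, features):             # features: (batch, channels, H, W)
        b, c, h, w = features.shape
        f = features.permute(0, 2, 3, 1).reshape(b, h * w, c)
        # Squared L2 distance of every spatial cell to every prototype.
        d2 = torch.cdist(f, self.prototypes.unsqueeze(0).expand(b, -1, -1)) ** 2
        sim = torch.log((d2 + 1) / (d2 + self.eps))          # high when patch matches prototype
        sim = sim.reshape(b, h, w, -1).permute(0, 3, 1, 2)   # (batch, n_proto, H, W)
        # Location weighting: softmax over the grid keeps weights positive
        # and concentrates each prototype on a preferred region.
        loc = torch.softmax(
            self.location_logits.reshape(-1, h * w), dim=-1
        ).reshape(1, -1, h, w)
        weighted = sim * loc * (h * w)       # rescale so a uniform map is neutral
        # Max over space: the best location-weighted match per prototype.
        return weighted.amax(dim=(-2, -1))   # (batch, n_prototypes)

# A classification head on top of the prototype scores (hypothetical sizes).
layer = LocationAwarePrototypeLayer(n_prototypes=8, channels=16, height=12, width=24)
head = nn.Linear(8, 2)                       # e.g. active vs. suppressed convective state
scores = layer(torch.randn(4, 16, 12, 24))   # stand-in for a CNN backbone's feature maps
logits = head(scores)
```

Because the classification head is linear in the prototype scores, each prediction decomposes into contributions of the form "this input resembles prototype k in prototype k's preferred region," which is the sense in which such a model is interpretable by construction. The (h * w) rescaling makes a uniform location map equivalent to the location-agnostic case.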