Many problems in climate science require extracting forced signals from a background of internal climate variability. We demonstrate that artificial neural networks (ANNs) are a useful addition to the climate science “toolbox” for this purpose. Specifically, forced patterns are detected by an ANN trained on climate model simulations under historical and future climate scenarios. By identifying spatial patterns that serve as indicators of change in surface temperature and precipitation, the ANN can determine the approximate year from which the simulations came without first explicitly separating the forced signal from the noise of both internal climate variability and model uncertainty. Thus, the ANN indicator patterns are complex, nonlinear combinations of signal and noise and are identified from the 1960s onward in simulated and observed surface temperature maps. This approach suggests that viewing climate patterns through an artificial intelligence (AI) lens has the power to uncover new insights into climate variability and change.
Assessing forced climate change requires the extraction of the forced signal from the background of climate noise. Traditionally, tools for extracting forced climate change signals have focused on one atmospheric variable at a time; however, using multiple variables can reduce noise and allow for easier detection of the forced response. Following previous work, we train artificial neural networks to predict the year of single‐ and multi‐variable maps from forced climate model simulations. To perform this task, the neural networks learn patterns that allow them to discriminate between maps from different years—that is, the neural networks learn the patterns of the forced signal amidst the shroud of internal variability and climate model disagreement. When presented with combined input fields (multiple seasons, variables, or both), the neural networks are able to detect the signal of forced change earlier than when given single fields alone by utilizing complex, nonlinear relationships between multiple variables and seasons. We use layer‐wise relevance propagation, a neural network explainability tool, to identify the multivariate patterns learned by the neural networks that serve as reliable indicators of the forced response. These “indicator patterns” vary in time and between climate models, providing a template for investigating inter‐model differences in the time evolution of the forced response. This work demonstrates how neural networks and their explainability tools can be harnessed to identify patterns of the forced signal within combined fields.
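The year-prediction task described above can be sketched end-to-end on synthetic data. The sketch below is illustrative rather than the authors' setup: it fabricates "maps" whose forced signal is a fixed spatial pattern scaled by the year, buried in noise, and trains a small one-hidden-layer network (plain NumPy, manual backpropagation) to regress the normalized year. All sizes, noise scales, and names are assumptions.

```python
import numpy as np

# Hypothetical synthetic setup: each flattened map is a fixed spatial
# "fingerprint" scaled by the (normalized) year, plus noise standing in
# for internal variability. All parameters are illustrative.
rng = np.random.default_rng(0)
n_grid, n_years, n_members = 50, 80, 20            # grid points, years, ensemble members
year_idx = np.repeat(np.arange(n_years), n_members)
forced_pattern = rng.normal(size=n_grid)           # fixed spatial fingerprint
y = year_idx / n_years                             # target: normalized year
X = np.outer(y, forced_pattern) + 0.5 * rng.normal(size=(len(y), n_grid))

# One-hidden-layer ReLU network, full-batch gradient descent on MSE
n_hidden, lr = 16, 0.1
W1 = 0.1 * rng.normal(size=(n_grid, n_hidden)); b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.normal(size=(n_hidden, 1));      b2 = np.zeros(1)
for _ in range(2000):
    h = np.maximum(X @ W1 + b1, 0.0)               # forward pass
    pred = (h @ W2 + b2).ravel()
    err = pred - y                                 # gradient of MSE w.r.t. pred (up to 2/N)
    gW2 = h.T @ err[:, None] / len(y)
    gb2 = err.mean(keepdims=True)
    dh = (err[:, None] @ W2.T) * (h > 0)           # backprop through ReLU
    gW1 = X.T @ dh / len(y)
    gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

rmse = float(np.sqrt(np.mean((pred - y) ** 2)))
print(f"RMSE in normalized-year units: {rmse:.3f} (target std is about 0.29)")
```

Because the forced fingerprint is fixed while the noise is independent across grid points, the network can average noise away spatially and recover the year far better than any single grid point would allow, which is the intuition behind multi-field inputs improving detection.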
- Award ID(s): 2019758
- NSF-PAR ID: 10512799
- Publisher / Repository: Wiley Online Library
- Date Published:
- Journal Name: Journal of Advances in Modeling Earth Systems
- Volume: 14
- Issue: 7
- ISSN: 1942-2466
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Abstract Many problems in climate science require the identification of signals obscured by both the “noise” of internal climate variability and differences across models. Following previous work, we train an artificial neural network (ANN) to predict the year of a given map of annual‐mean temperature (or precipitation) from forced climate model simulations. This prediction task requires the ANN to learn forced patterns of change amidst a background of climate noise and model differences. We then apply a neural network visualization technique (layerwise relevance propagation) to visualize the spatial patterns that lead the ANN to successfully predict the year. These spatial patterns thus serve as “reliable indicators” of the forced change. The architecture of the ANN is chosen such that these indicators vary in time, thus capturing the evolving nature of regional signals of change. Results are compared to those of more standard approaches like signal‐to‐noise ratios and multilinear regression in order to gain intuition about the reliable indicators identified by the ANN. We then apply an additional visualization tool (backward optimization) to highlight where disagreements in simulated and observed patterns of change are most important for the prediction of the year. This work demonstrates that ANNs and their visualization tools make a powerful pair for extracting climate patterns of forced change.
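Layerwise relevance propagation, the visualization technique named above, redistributes a network's prediction backward through the layers in proportion to each unit's contribution. Below is a minimal sketch of the epsilon rule for a one-hidden-layer ReLU network; the weights are random placeholders standing in for a trained model, and all names are illustrative.

```python
import numpy as np

# Epsilon-rule LRP for a tiny one-hidden-layer ReLU network with zero
# biases. Random weights stand in for a trained model; names are illustrative.
rng = np.random.default_rng(1)
n_in, n_hidden = 10, 6
W1 = rng.normal(size=(n_in, n_hidden))
W2 = rng.normal(size=(n_hidden, 1))

x = rng.normal(size=n_in)                  # one input "map"
h = np.maximum(x @ W1, 0.0)                # forward pass
out = float(h @ W2)                        # scalar prediction

eps = 1e-6                                 # stabilizer in the epsilon rule
# Output -> hidden: split the prediction among hidden-unit contributions
z2 = (h[:, None] * W2).ravel()
R_h = out * z2 / (z2.sum() + eps)
# Hidden -> input: split each hidden relevance among input contributions
z1 = x[:, None] * W1                       # (n_in, n_hidden) contribution matrix
R_x = (z1 / (z1.sum(axis=0) + eps) * R_h).sum(axis=1)

# With zero biases, relevance should be (approximately) conserved:
print(f"prediction {out:.4f} vs summed input relevance {R_x.sum():.4f}")
```

The conservation check at the end is the useful property: each input grid point receives a share of the prediction, so a map of `R_x` highlights where the network "looked" — the reliable indicators in the terminology above.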
Abstract We show that explainable neural networks can identify regions of oceanic variability that contribute predictability on decadal timescales in a fully coupled Earth‐system model. The neural networks learn to use sea‐surface temperature anomalies to predict future continental surface temperature anomalies. We then use a neural‐network explainability method called layerwise relevance propagation to infer which oceanic patterns lead to accurate predictions made by the neural networks. In particular, regions within the North Atlantic Ocean and North Pacific Ocean lend the most predictability for surface temperature across continental North America. We apply the proposed methodology to decadal variability, although the concept is generalizable to other timescales of predictability. Furthermore, while our approach focuses on predictable patterns of internal variability within climate models, it should be generalizable to observational data as well. Our study contributes to the growing evidence that explainable neural networks are important tools for advancing geoscientific knowledge.
Even though human experience unfolds continuously in time, it is not strictly linear; instead, it entails cascading processes building hierarchical cognitive structures. For instance, during speech perception, humans transform a continuously varying acoustic signal into phonemes, words, and meaning, and these levels all have distinct but interdependent temporal structures. Time-lagged regression using temporal response functions (TRFs) has recently emerged as a promising tool for disentangling electrophysiological brain responses related to such complex models of perception. Here, we introduce the Eelbrain Python toolkit, which makes this kind of analysis easy and accessible. We demonstrate its use, using continuous speech as a sample paradigm, with a freely available EEG dataset of audiobook listening. A companion GitHub repository provides the complete source code for the analysis, from raw data to group-level statistics. More generally, we advocate a hypothesis-driven approach in which the experimenter specifies a hierarchy of time-continuous representations that are hypothesized to have contributed to brain responses, and uses those as predictor variables for the electrophysiological signal. This is analogous to a multiple regression problem, but with the addition of a time dimension. TRF analysis decomposes the brain signal into distinct responses associated with the different predictor variables by estimating a multivariate TRF (mTRF), quantifying the influence of each predictor on brain responses as a function of time (lags). This allows asking two questions about the predictor variables: (1) Is there a significant neural representation corresponding to this predictor variable? And if so, (2) what are the temporal characteristics of the neural response associated with it? Thus, different predictor variables can be systematically combined and evaluated to jointly model neural processing at multiple hierarchical levels. We discuss applications of this approach, including the potential for linking algorithmic/representational theories at different cognitive levels to brain responses through computational models with appropriate linking hypotheses.
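The time-lagged regression at the core of TRF analysis can be sketched without any toolkit: build a design matrix whose columns are delayed copies of the predictor, then solve a ridge regression for the response kernel. The example below uses synthetic data and is not the Eelbrain API; the variable names and ridge penalty are assumptions.

```python
import numpy as np

# Sketch of TRF estimation: a continuous predictor drives a "response"
# through an unknown causal kernel; lagged ridge regression recovers it.
# All data are synthetic; names are illustrative.
rng = np.random.default_rng(2)
n_times, n_lags = 5000, 20
stim = rng.normal(size=n_times)                    # continuous predictor
true_trf = np.exp(-np.arange(n_lags) / 4.0)        # ground-truth response kernel
resp = np.convolve(stim, true_trf)[:n_times] + 0.5 * rng.normal(size=n_times)

# Lagged design matrix: column k holds the predictor delayed by k samples
X = np.zeros((n_times, n_lags))
for k in range(n_lags):
    X[k:, k] = stim[:n_times - k]

lam = 1.0                                          # ridge penalty (assumed)
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ resp)

corr = float(np.corrcoef(trf_hat, true_trf)[0, 1])
print(f"correlation between estimated and true TRF: {corr:.3f}")
```

An mTRF extends this by horizontally stacking lagged blocks for several predictors into one design matrix, which is exactly the "multiple regression with a time dimension" described above.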
Abstract Few studies have utilized machine learning techniques to predict or understand the Madden‐Julian oscillation (MJO), a key source of subseasonal variability and predictability. Here, we present a simple framework for real‐time MJO prediction using shallow artificial neural networks (ANNs). We construct two ANN architectures, one deterministic and one probabilistic, that predict a real‐time MJO index using maps of tropical variables. These ANNs make skillful MJO predictions out to ∼18 days in October‐March and ∼11 days in April‐September, outperforming conventional linear models and efficiently capturing aspects of MJO predictability found in more complex, dynamical models. The flexibility and explainability of simple ANN frameworks are highlighted through varying model input and applying ANN explainability techniques that reveal sources and regions important for ANN prediction skill. The accessibility, performance, and efficiency of this simple machine learning framework make it broadly applicable to predicting and understanding other Earth system phenomena.
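For context on statements like "skillful out to ∼18 days": MJO forecast skill is conventionally scored with the bivariate correlation between the observed and predicted two-component index (e.g., RMM1/RMM2), with 0.5 a common skill threshold. The sketch below computes that metric on synthetic forecasts whose error grows with lead time; the noise model and numbers are illustrative assumptions, not results from the paper.

```python
import numpy as np

# Bivariate correlation skill for a two-component MJO index.
# Forecasts are synthetic: truth plus lead-dependent noise (illustrative).
rng = np.random.default_rng(3)
n_days = 2000
truth = rng.normal(size=(n_days, 2))           # observed index components

def bivariate_corr(obs, fcst):
    """Standard bivariate correlation between observed and forecast index."""
    num = np.sum(obs[:, 0] * fcst[:, 0] + obs[:, 1] * fcst[:, 1])
    return num / (np.sqrt(np.sum(obs ** 2)) * np.sqrt(np.sum(fcst ** 2)))

skill = {}
for lead in (1, 5, 10, 20):
    noise_sd = 0.4 * np.sqrt(lead)             # assumed error growth with lead
    fcst = truth + noise_sd * rng.normal(size=truth.shape)
    skill[lead] = bivariate_corr(truth, fcst)
    print(f"lead {lead:2d} days: bivariate correlation {skill[lead]:.2f}")
```

The lead at which this curve drops below 0.5 is the "skillful out to N days" figure quoted for forecast systems, whether dynamical models or the ANNs described above.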