- 
Abstract: The Southern Ocean is a region of high surface nutrient content, reflecting an inefficient biological carbon pump. The variability, predictability, and causes of changes in these nutrient levels on interannual to decadal time scales remain unclear. We employ a deep learning approach, specifically a Temporal Convolution Attention Neural Network (TCANN), to conduct multi-year forecasting of surface nutrient concentrations based on oceanic physical drivers. The TCANN successfully replicates testing data, with prediction skill extending to at least 4 years for the GFDL-ESM4-driven model and 1 year for the observation-driven model. To benchmark the results, we compare the prediction skill of TCANN with a simple persistence model and two regression methods, linear regression and ridge regression. The TCANN model predicts variability with higher skill than persistence and the two regression methods, indicating that the nonlinearities present in the system are too strong for traditional regression methods to capture interannual variability. To enhance the interpretability of the predictions, we explore three explainable AI techniques: occlusion analysis, integrated gradients, and Gradient SHAP. The outcomes suggest a crucial role played by salinity processes and buoyancy/potential-density fluxes in the prediction of surface nutrients on annual time scales. The deep learning tools' ability to provide skillful forecasts well into the future presents a promising avenue for gaining insights into how the Southern Ocean's surface nutrients respond to climate change based on physical quantities.
Free, publicly-accessible full text available June 1, 2026.
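The benchmarking step described above can be illustrated with a toy sketch: a persistence forecast and a closed-form ridge regression on lagged values, scored with an anomaly correlation coefficient. The AR(1) series, window length, and regularization strength below are hypothetical stand-ins, not the study's data or configuration.

```python
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(0)

# Synthetic stand-in for an annual surface-nutrient anomaly series
# (the actual study uses GFDL-ESM4 output and observations).
n = 200
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.normal(scale=0.5)  # AR(1) toy series

lead = 1  # forecast lead time in years
past, future = x[:-lead], x[lead:]

# Persistence baseline: the forecast is simply the last observed value.
persistence_pred = past

# Ridge regression on a window of lagged values (closed-form solution).
window, lam = 5, 1.0
X = np.stack([x[i : i + window] for i in range(n - window - lead + 1)])
y = x[window + lead - 1 :]
w = solve(X.T @ X + lam * np.eye(window), X.T @ y)
ridge_pred = X @ w

def skill(pred, truth):
    """Anomaly correlation coefficient, a common forecast-skill metric."""
    return np.corrcoef(pred, truth)[0, 1]

print("persistence skill:", skill(persistence_pred, future))
print("ridge skill:      ", skill(ridge_pred, y))
```

Comparing a learned model against such cheap baselines is what establishes that its skill comes from captured dynamics rather than simple autocorrelation.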
- 
Abstract: Artificial neural networks are increasingly used for geophysical modeling to extract complex nonlinear patterns from geospatial data. However, it is difficult to understand how networks make predictions, limiting trust in the model, debugging capacity, and physical insights. EXplainable Artificial Intelligence (XAI) techniques expose how models make predictions, but XAI results may be influenced by correlated features. Geospatial data typically exhibit substantial autocorrelation. With correlated input features, learning methods can produce many networks that achieve very similar performance (e.g., arising from different initializations). Since the networks capture different relationships, their attributions can vary. Correlated features may also cause inaccurate attributions because XAI methods typically evaluate isolated features, whereas networks learn multifeature patterns. Few studies have quantitatively analyzed the influence of correlated features on XAI attributions. We use a benchmark framework of synthetic data with increasingly strong correlation, for which the ground truth attribution is known. For each dataset, we train multiple networks and compare XAI-derived attributions to the ground truth. We show that correlation may dramatically increase the variance of the derived attributions, and investigate the cause of the high variance: is it because different trained networks learn highly different functions or because XAI methods become less faithful in the presence of correlation? Finally, we show XAI applied to superpixels, instead of single grid cells, substantially decreases attribution variance. Our study is the first to quantify the effects of strong correlation on XAI, to investigate the reasons that underlie these effects, and to offer a promising way to address them.
Free, publicly-accessible full text available January 1, 2026.
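The core experiment, training several networks that differ only in initialization on correlated features and then comparing their attributions, can be sketched as follows. The two-feature dataset, the `MLPRegressor` architecture, and the occlusion-style attribution are illustrative assumptions, not the benchmark framework used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Two strongly correlated input features; only x0 drives the target.
n = 2000
x0 = rng.normal(size=n)
x1 = x0 + 0.1 * rng.normal(size=n)    # corr(x0, x1) is roughly 0.995
X = np.column_stack([x0, x1])
y = 2.0 * x0                          # ground-truth attribution: all on x0

def occlusion_attribution(model, X):
    """Mean absolute change in prediction when one feature is zeroed out."""
    base = model.predict(X)
    attr = []
    for j in range(X.shape[1]):
        Xo = X.copy()
        Xo[:, j] = 0.0
        attr.append(np.mean(np.abs(base - model.predict(Xo))))
    return np.array(attr)

# Networks differing only in initialization can split the credit
# between the two correlated features in different ways.
attrs = []
for seed in (1, 2, 3):
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=seed).fit(X, y)
    attrs.append(occlusion_attribution(net, X))
attrs = np.array(attrs)

print("per-seed occlusion attributions (x0, x1):")
print(attrs)
print("across-seed spread of attributions:", attrs.std(axis=0))
```

The across-seed spread is the attribution variance the abstract refers to; pooling correlated inputs into superpixels is one way to shrink it.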
- 
Abstract: Convolutional neural networks (CNNs) have recently attracted great attention in geoscience due to their ability to capture nonlinear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature, however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the CNN decision-making strategy. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and gain insight into their relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, where the ground truth of the network's explanation is known a priori, to help objectively assess their performance. Second, we apply XAI to a climate-related prediction setting, namely to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, ignorance to zero input) that have previously been overlooked in our field and, if not considered cautiously, may lead to a distorted picture of the CNN decision-making strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will help toward a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems.
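One of the issues named above, ignorance to zero input, can be demonstrated with a minimal linear example (a toy model, not the CNN from the study): gradient-times-input assigns zero attribution to any zero-valued feature, no matter how sensitive the model is to it.

```python
import numpy as np

# A toy "network": f(x) = w.x + b, explained via gradient x input.
w = np.array([1.5, -2.0, 0.5])
b = 0.3

def f(x):
    return w @ x + b

def grad_times_input(x):
    # For a linear model the gradient is exactly w everywhere.
    return w * x

x = np.array([2.0, 0.0, 1.0])   # the second feature is zero
attr = grad_times_input(x)
print("prediction:", f(x))
print("gradient x input:", attr)
# The zero-valued feature receives zero attribution by construction,
# even though the model would respond strongly if that feature changed
# (its sensitivity is w[1] = -2.0, the largest in magnitude).
```

Issues like this are why benchmarking XAI methods against a known ground truth, as the abstract describes, matters before trusting their explanations.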
- 
Abstract: Climate-driven changes in precipitation amounts and their seasonal variability are expected in many continental-scale regions during the remainder of the 21st century. However, much less is known about future changes in the predictability of seasonal precipitation, an important earth system property relevant for climate adaptation. Here, on the basis of CMIP6 models that capture the present-day teleconnections between seasonal precipitation and previous-season sea surface temperature (SST), we show that climate change is expected to alter the SST-precipitation relationships, and thus our ability to predict seasonal precipitation, by 2100. Specifically, in the tropics, seasonal precipitation predictability from SSTs is projected to increase throughout the year, except in northern Amazonia during boreal winter. Concurrently, in the extra-tropics predictability is likely to increase in central Asia during boreal spring and winter. The altered predictability, together with enhanced interannual variability of seasonal precipitation, poses new opportunities and challenges for regional water management.
- 
Abstract: Many of our generation's most pressing environmental science problems are wicked problems, which means they cannot be cleanly isolated and solved with a single 'correct' answer (e.g., Rittel 1973; Wirz 2021). The NSF AI Institute for Research on Trustworthy AI in Weather, Climate, and Coastal Oceanography (AI2ES) seeks to address such problems by developing synergistic approaches with a team of scientists from three disciplines: environmental science (including atmospheric, ocean, and other physical sciences), AI, and social science including risk communication. As part of our work, we developed a novel approach to summer school, held June 27-30, 2022. The goal of this summer school was to teach a new generation of environmental scientists how to cross disciplines and develop approaches that integrate all three disciplinary perspectives in order to solve environmental science problems. In addition to a lecture series that focused on the synthesis of AI, environmental science, and risk communication, this year's summer school included a unique Trust-a-thon component where participants gained hands-on experience applying both risk communication and explainable AI techniques to pre-trained ML models. We had 677 participants from 63 countries register and attend online. Lecture topics included trust and trustworthiness (Day 1), explainability and interpretability (Day 2), data and workflows (Day 3), and uncertainty quantification (Day 4). For the Trust-a-thon we developed challenge problems for three different application domains: (1) severe storms, (2) tropical cyclones, and (3) space weather. Each domain had an associated user persona to guide user-centered development.
- 
Abstract: Despite the increasingly successful application of neural networks to many problems in the geosciences, their complex and nonlinear structure makes the interpretation of their predictions difficult, which limits model trust and does not allow scientists to gain physical insights about the problem at hand. Many different methods have been introduced in the emerging field of eXplainable Artificial Intelligence (XAI), which aims at attributing the network's prediction to specific features in the input domain. XAI methods are usually assessed by using benchmark datasets (such as MNIST or ImageNet for image classification). However, an objective, theoretically derived ground truth for the attribution is lacking for most of these datasets, making the assessment of XAI in many cases subjective. Also, benchmark datasets specifically designed for problems in geosciences are rare. Here, we provide a framework, based on the use of additively separable functions, to generate attribution benchmark datasets for regression problems for which the ground truth of the attribution is known a priori. We generate a large benchmark dataset and train a fully connected network to learn the underlying function that was used for simulation. We then compare estimated heatmaps from different XAI methods to the ground truth in order to identify examples where specific XAI methods perform well or poorly. We believe that attribution benchmarks such as the ones introduced herein are of great importance for further application of neural networks in the geosciences, and for more objective assessment and accurate implementation of XAI methods, which will increase model trust and assist in discovering new science.
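A minimal sketch of the benchmark construction, assuming simple one-dimensional component functions (the actual benchmark uses synthetic geospatial fields): because F is additively separable, each feature's ground-truth attribution is simply its own component value.

```python
import numpy as np

rng = np.random.default_rng(0)

# Additively separable function F(x) = sum_i f_i(x_i): the contribution
# of feature i to the output is f_i(x_i) by construction, giving an
# objective ground-truth attribution for every sample.
def f_components(X):
    return np.stack([np.sin(X[:, 0]),            # f_0
                     X[:, 1] ** 2,               # f_1
                     np.tanh(X[:, 2])], axis=1)  # f_2

X = rng.normal(size=(1000, 3))
contributions = f_components(X)       # ground-truth attribution map
y = contributions.sum(axis=1)         # regression target

# Any XAI method applied to a model trained on (X, y) can now be
# scored against `contributions`, e.g. with a per-sample correlation.
def attribution_score(estimated, truth):
    per_sample = [np.corrcoef(e, t)[0, 1] for e, t in zip(estimated, truth)]
    return float(np.mean(per_sample))

# Sanity check: the ground truth scores perfectly against itself.
print(attribution_score(contributions, contributions))
```

A network trained on (X, y) and explained with any attribution method can then be scored with `attribution_score`, which is the objective assessment the abstract argues is missing for standard image benchmarks.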
- 
Abstract: Methods of explainable artificial intelligence (XAI) are used in geoscientific applications to gain insights into the decision-making strategy of neural networks (NNs), highlighting which features in the input contribute the most to a NN prediction. Here, we discuss our "lesson learned" that the task of attributing a prediction to the input does not have a single solution. Instead, the attribution results depend greatly on the considered baseline that the XAI method utilizes, a fact that has been overlooked in the geoscientific literature. The baseline is a reference point to which the prediction is compared so that the prediction can be understood. This baseline can be chosen by the user or is set by construction in the method's algorithm, often without the user being aware of that choice. We highlight that different baselines can lead to different insights for different science questions and, thus, should be chosen accordingly. To illustrate the impact of the baseline, we use a large ensemble of historical and future climate simulations forced with the shared socioeconomic pathway 3-7.0 (SSP3-7.0) scenario and train a fully connected NN to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We then use various XAI methods and different baselines to attribute the network predictions to the input. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions. We conclude by discussing important implications and considerations about the use of baselines in XAI research.
Significance Statement: In recent years, methods of explainable artificial intelligence (XAI) have found great application in geoscientific applications, because they can be used to attribute the predictions of neural networks (NNs) to the input and interpret them physically. Here, we highlight that the attributions, and the physical interpretation, depend greatly on the choice of the baseline, a fact that has been overlooked in the geoscientific literature. We illustrate this dependence for a specific climate task, in which a NN is trained to predict the ensemble- and global-mean temperature (i.e., the forced global warming signal) given an annual temperature map from an individual ensemble member. We show that attributions differ substantially when considering different baselines, because they correspond to answering different science questions.
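The baseline dependence can be reproduced with a small numerical sketch of integrated gradients on an analytic function (a toy stand-in for the trained NN); the zero and nonzero baselines below are illustrative choices, not the ones examined in the study.

```python
import numpy as np

# Integrated gradients for a simple differentiable function, approximated
# with a midpoint Riemann sum. The attribution explains f(x) - f(baseline),
# so the baseline changes both the question asked and the answer.
def f(x):
    return x[0] ** 2 + 3.0 * x[1]

def grad_f(x):
    return np.array([2.0 * x[0], 3.0])

def integrated_gradients(x, baseline, steps=200):
    alphas = (np.arange(steps) + 0.5) / steps          # midpoints in (0, 1)
    path = baseline + alphas[:, None] * (x - baseline)
    grads = np.array([grad_f(p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([2.0, 1.0])
ig_zero = integrated_gradients(x, np.zeros(2))           # "relative to zero"
ig_mean = integrated_gradients(x, np.array([1.0, 1.0]))  # e.g. a climatology

print("zero baseline:", ig_zero)
print("mean baseline:", ig_mean)
# Completeness: each attribution vector sums to f(x) - f(baseline),
# so different baselines necessarily distribute different totals.
```

Here the same input receives attribution [4, 3] against the zero baseline but [3, 0] against the mean baseline: the second feature matters "relative to zero" yet not "relative to the mean," mirroring the abstract's point that baselines encode different science questions.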
- 
Abstract: Spectral PCA (sPCA), in contrast to classical PCA, offers the advantage of identifying organized spatiotemporal patterns within specific frequency bands and extracting dynamical modes. However, the unavoidable trade-off between frequency resolution and robustness of the PCs leads to high sensitivity to noise and overfitting, which limits the interpretation of the sPCA results. We propose herein a simple nonparametric implementation of sPCA using the continuous analytic Morlet wavelet as a robust estimator of the cross-spectral matrices with good frequency resolution. To improve the interpretability of the results, especially when several modes of similar amplitude exist within the same frequency band, we propose a rotation of the complex-valued eigenvectors to optimize their spatial regularity (smoothness). The developed method, called rotated spectral PCA (rsPCA), is tested on synthetic data simulating propagating waves and shows impressive performance even with high levels of noise in the data. Applied to global historical geopotential height (GPH) and sea surface temperature (SST) daily time series, the method accurately captures patterns of atmospheric Rossby waves at high frequencies (3–60-day periods) in both GPH and SST and El Niño–Southern Oscillation (ENSO) at low frequencies (2–7-yr periodicity) in SST. At high frequencies the rsPCA successfully unmixes the identified waves, revealing spatially coherent patterns with robust propagation dynamics.
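A hand-rolled sketch of the first ingredient, a continuous Morlet wavelet transform used as a spectral estimator, may look as follows; the rotation step and the cross-spectral matrices are omitted, and the series, scales, and normalization below are illustrative choices rather than the paper's implementation.

```python
import numpy as np

def morlet_cwt(x, scales, w0=6.0, dt=1.0):
    """Continuous wavelet transform with a complex Morlet mother wavelet."""
    n = len(x)
    out = np.empty((len(scales), n), dtype=complex)
    for k, s in enumerate(scales):
        m = int(10 * s / dt) | 1                 # odd kernel spanning ~10 scales
        t = (np.arange(m) - m // 2) * dt
        psi = (np.pi ** -0.25) * np.exp(1j * w0 * t / s) \
              * np.exp(-0.5 * (t / s) ** 2)
        psi *= dt / np.sqrt(s)                   # keep unit L2 norm per scale
        out[k] = np.convolve(x, np.conj(psi)[::-1], mode="same")
    return out

# Toy series: a 20-sample oscillation buried in noise.
rng = np.random.default_rng(0)
n = 512
t = np.arange(n)
x = np.sin(2 * np.pi * t / 20) + 0.5 * rng.normal(size=n)

# For a Morlet with w0=6, the scale matching a period p is roughly
# p * w0 / (2 * pi).
scales = np.array([p * 6.0 / (2 * np.pi) for p in (10, 20, 40)])
power = np.abs(morlet_cwt(x, scales)) ** 2

# Time-mean wavelet power should peak at the scale matching the
# 20-sample period of the underlying oscillation.
print(power.mean(axis=1))
```

In the rsPCA setting, transforms like this (one per spatial grid point) are what feed the cross-spectral matrices whose complex eigenvectors are then rotated for spatial smoothness.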
- 
Abstract: Understanding the physical drivers of seasonal hydroclimatic variability and improving predictive skill remains a challenge with important socioeconomic and environmental implications for many regions around the world. Physics-based deterministic models show limited ability to predict precipitation as the lead time increases, due to imperfect representation of physical processes and incomplete knowledge of initial conditions. Similarly, statistical methods drawing upon established climate teleconnections have low prediction skill due to the complex nature of the climate system. Recently, promising data-driven approaches have been proposed, but they often suffer from overparameterization and overfitting due to the short observational record, and they often do not account for spatiotemporal dependencies among covariates (i.e., predictors such as sea surface temperatures). This study addresses these challenges via a predictive model based on a graph-guided regularizer that simultaneously promotes similarity of predictive weights for highly correlated covariates and enforces sparsity in the covariate domain. This approach both decreases the effective dimensionality of the problem and identifies the most predictive features without specifying them a priori. We use large ensemble simulations from a climate model to construct this regularizer, reducing the structural uncertainty in the estimation. We apply the learned model to predict winter precipitation in the southwestern United States using sea surface temperatures over the entire Pacific basin, and demonstrate its superiority compared to other regularization approaches and statistical models informed by known teleconnections. Our results highlight the potential to optimally combine the space-time structure of predictor variables learned from climate models with new graph-based regularizers to improve seasonal prediction.
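The effect of such a regularizer can be sketched with a simplified quadratic (graph-Laplacian) variant that promotes similar weights for neighboring covariates; the chain graph, the penalty strengths, and the omission of the sparsity term are simplifying assumptions, not the study's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: covariates on a 1-D "grid" (think neighboring SST cells),
# where adjacent covariates should receive similar predictive weights.
p, n = 30, 200
X = rng.normal(size=(n, p))
w_true = np.zeros(p)
w_true[10:15] = 1.0                    # a spatially coherent block of signal
y = X @ w_true + 0.5 * rng.normal(size=n)

# Graph Laplacian of the chain graph linking neighboring covariates:
# w^T L w equals the sum over edges (i, j) of (w_i - w_j)^2.
L = np.zeros((p, p))
for i in range(p - 1):
    L[i, i] += 1; L[i + 1, i + 1] += 1
    L[i, i + 1] -= 1; L[i + 1, i] -= 1

def fit(lam_graph, lam_ridge=1e-2):
    # Closed-form solution of ||y - Xw||^2 + lam_graph * w^T L w
    # + lam_ridge * ||w||^2 (the quadratic part of the objective).
    A = X.T @ X + lam_graph * L + lam_ridge * np.eye(p)
    return np.linalg.solve(A, X.T @ y)

w_plain = fit(lam_graph=0.0)
w_graph = fit(lam_graph=50.0)

def roughness(w):
    """Sum of squared differences between neighboring weights."""
    return float(np.sum(np.diff(w) ** 2))

print("roughness without graph penalty:", roughness(w_plain))
print("roughness with graph penalty:   ", roughness(w_graph))
```

The graph penalty pulls neighboring weights together, yielding the spatially smooth weight maps that make the learned predictors physically interpretable; the full method additionally enforces sparsity, which this quadratic sketch leaves out.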