The Madden‐Julian oscillation (MJO) is the leading source of global subseasonal predictability; however, many dynamical forecasting systems struggle to predict MJO propagation through the Maritime Continent. Better understanding the biases in simulated physical processes associated with MJO propagation is key to improving MJO prediction. In this study, MJO prediction skill, propagation processes, and mean state biases are evaluated in reforecasts from models participating in the Subseasonal Experiment (SubX) and Subseasonal to Seasonal (S2S) prediction projects. SubX and S2S reforecasts show MJO prediction skill out to 4.5 weeks based on the Real‐time Multivariate MJO index, consistent with previous studies. However, a closer examination of these models' representation of MJO propagation through the Maritime Continent reveals that they fail to predict the MJO convection, associated circulations, and moisture advection processes beyond 10 days, with most models underestimating MJO amplitude. The biases in MJO propagation can be partly attributed to the following mean‐state biases across the Indo‐Pacific: a drier lower troposphere, excess surface precipitation, more frequent occurrence of light precipitation rates, and a transition to stronger precipitation rates at lower humidity than in observations. This indicates that deep convection occurs too frequently in models and is not sufficiently inhibited when tropospheric moisture is low, which is likely due to the representation of entrainment.
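The "4.5 weeks of skill" quoted above is conventionally measured with the bivariate anomaly correlation between forecast and observed (RMM1, RMM2) pairs, with the skill horizon usually taken as the lead time at which the correlation drops below 0.5. A minimal sketch of that metric (function and variable names are illustrative, not from the study):

```python
import numpy as np

def bivariate_correlation(rmm1_f, rmm2_f, rmm1_o, rmm2_o):
    """Bivariate anomaly correlation between forecast (f) and observed (o)
    RMM index components, accumulated over all verification times at a
    fixed lead. Values near 1 indicate a skillful MJO forecast."""
    num = np.sum(rmm1_f * rmm1_o + rmm2_f * rmm2_o)
    den = (np.sqrt(np.sum(rmm1_f**2 + rmm2_f**2))
           * np.sqrt(np.sum(rmm1_o**2 + rmm2_o**2)))
    return num / den
```

A perfect forecast yields a correlation of exactly 1, and a forecast with the MJO signal reversed yields -1, which makes the 0.5 threshold a loose "useful skill" cutoff rather than a strict error bound.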
Few studies have utilized machine learning techniques to predict or understand the Madden‐Julian oscillation (MJO), a key source of subseasonal variability and predictability. Here, we present a simple framework for real‐time MJO prediction using shallow artificial neural networks (ANNs). We construct two ANN architectures, one deterministic and one probabilistic, that predict a real‐time MJO index from maps of tropical variables. These ANNs make skillful MJO predictions out to ∼18 days in October‐March and ∼11 days in April‐September, outperforming conventional linear models and efficiently capturing aspects of MJO predictability found in more complex, dynamical models. The flexibility and explainability of simple ANN frameworks are highlighted by varying model input and applying ANN explainability techniques that reveal the sources and regions important for ANN prediction skill. The accessibility, performance, and efficiency of this simple machine learning framework make it more broadly applicable for predicting and understanding other Earth system phenomena.
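A "shallow ANN mapping tropical-field maps to an MJO index" can be caricatured as a single-hidden-layer regression network trained by gradient descent. Everything below is a synthetic stand-in (random "maps", a learnable two-component target in place of RMM1/RMM2); it illustrates the architecture, not the paper's trained model:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical stand-ins: flattened tropical-field maps (samples x grid
# points) and a two-component target playing the role of (RMM1, RMM2).
X = rng.standard_normal((200, 64))
Y = 0.5 * X[:, :2]                       # synthetic, learnable target

# Shallow ANN: one tanh hidden layer, linear two-unit output.
W1 = 0.1 * rng.standard_normal((64, 16)); b1 = np.zeros(16)
W2 = 0.1 * rng.standard_normal((16, 2));  b2 = np.zeros(2)

def predict(X):
    return np.tanh(X @ W1 + b1) @ W2 + b2

mse0 = np.mean((predict(X) - Y) ** 2)    # error before training

lr = 0.05
for _ in range(500):                     # plain full-batch gradient descent
    H = np.tanh(X @ W1 + b1)
    P = H @ W2 + b2
    G = (P - Y) / len(X)                 # output-layer error signal
    GH = (G @ W2.T) * (1.0 - H ** 2)     # backprop through tanh
    W2 -= lr * H.T @ G;  b2 -= lr * G.sum(axis=0)
    W1 -= lr * X.T @ GH; b1 -= lr * GH.sum(axis=0)

mse = np.mean((predict(X) - Y) ** 2)     # error after training
```

The probabilistic variant mentioned in the abstract would replace the plain squared-error output with a distributional output head, but the shallow structure is the same.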
- NSF-PAR ID: 10368357
- Publisher / Repository: DOI prefix 10.1029
- Journal Name: Journal of Advances in Modeling Earth Systems
- Volume: 14
- Issue: 5
- ISSN: 1942-2466
- Sponsoring Org: National Science Foundation
More Like this
Abstract: Predicting Pacific Decadal Oscillation (PDO) transitions and understanding the associated mechanisms has proven a critical but challenging task in climate science. As a form of decadal variability, the PDO is associated with both large‐scale climate shifts and regional climate predictability. We show that artificial neural networks (ANNs) predict PDO persistence and transitions with lead times of 12 months onward. Using layer‐wise relevance propagation to investigate the ANN predictions, we demonstrate that the ANNs utilize oceanic patterns that have been previously linked to predictable PDO behavior. For PDO transitions, ANNs recognize a build‐up of ocean heat content in the off‐equatorial western Pacific 12–27 months before a transition occurs. The results support the continued use of ANNs in climate studies where explainability tools can assist in mechanistic understanding of the climate system.
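Layer‐wise relevance propagation (LRP), the explainability technique used here, redistributes a network's output score backward through the layers in proportion to each unit's contribution. A toy sketch with the standard epsilon rule on a one‐hidden‐layer ReLU network; the weights and the "SST map" input are random stand‐ins, not the trained network from the study:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical tiny network: 10 "grid points" -> 6 ReLU units -> 1 score.
W1 = rng.standard_normal((10, 6)); b1 = np.zeros(6)
W2 = rng.standard_normal((6, 1));  b2 = np.zeros(1)
x  = rng.standard_normal(10)       # stand-in for a flattened SST-anomaly map

# Forward pass, keeping pre-activations for the backward relevance pass.
z1 = x @ W1 + b1
a1 = np.maximum(z1, 0.0)
z2 = a1 @ W2 + b2                  # network output (e.g., a transition score)

# LRP epsilon rule: each layer's relevance is split among the units below
# in proportion to their contributions a_j * w_jk to the pre-activation.
eps = 1e-6
R2 = z2                                        # relevance starts at the output
s2 = R2 / (z2 + eps * np.sign(z2))
R1 = a1 * (W2 @ s2)                            # hidden-unit relevance
s1 = R1 / (z1 + eps * np.sign(z1))
R0 = x * (W1 @ s1)                             # relevance map over grid points
```

With zero biases the rule is conservative: the input relevances `R0` sum back (up to epsilon) to the output score, which is what lets a relevance map be read as "where the prediction came from".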
Machine Learning (ML) models are increasingly used to make critical decisions in real-world applications, yet they have become more complex, making them harder to understand. To this end, researchers have proposed several techniques to explain model predictions. However, practitioners struggle to use these explainability techniques because they often do not know which one to choose and how to interpret the results of the explanations. In this work, we address these challenges by introducing TalkToModel: an interactive dialogue system for explaining machine learning models through conversations. TalkToModel comprises 1) a dialogue engine that adapts to any tabular model and dataset, understands language, and generates responses, and 2) an execution component that constructs the explanations. In real-world evaluations with humans, 73% of healthcare workers (e.g., doctors and nurses) agreed they would use TalkToModel over baseline point-and-click systems for explainability in a disease prediction task, and 85% of ML professionals agreed TalkToModel was easier to use for computing explanations. Our findings demonstrate that TalkToModel is more effective for model explainability than existing systems, introducing a new category of explainability tools for practitioners.
The Madden–Julian oscillation (MJO) is one of the most important sources of predictability on subseasonal to seasonal (S2S) time scales. Many previous studies have explored the impact of the present state of the MJO on the future evolution and predictability of extratropical weather patterns. What is still unclear, however, is the importance of the accumulated influence of past MJO activity on these results. In this study, the importance of past MJO activity in determining the future state of extratropical circulations is examined by using a linear baroclinic model (LBM) and one of the simplest machine learning algorithms: logistic regression. By increasing the complexity of the logistic regression model with additional information about the past activity of the MJO, it is demonstrated that the past 15 days play a dominant role in determining the state of MJO teleconnections more than 15 days into the future. This conclusion is supported by numerical LBM simulations. It is further shown that the past 15 days of additional information are only important for some MJO phases/lead times and not others, and the physical basis for this result is explored.
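The experiment described above, measuring how much additional information past MJO activity contributes, can be caricatured with a plain logistic regression whose features are the current MJO index plus lagged copies of it. Everything below (lags, coefficients, data) is synthetic and purely illustrative of the setup, not the study's actual predictors:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
# Hypothetical features: current (RMM1, RMM2) plus the same pair lagged
# 5, 10, and 15 days -- the "past MJO activity" columns.
X = rng.standard_normal((n, 8))
w_true = np.array([1.0, -0.5, 0.8, 0.0, 0.6, 0.0, -0.7, 0.3])
# Binary target standing in for a teleconnection state (e.g., a
# positive/negative extratropical circulation anomaly).
y = (X @ w_true + 0.3 * rng.standard_normal(n) > 0).astype(float)

# Logistic regression fit by gradient descent on the log loss.
w = np.zeros(8); b = 0.0; lr = 0.1
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    g = (p - y) / n
    w -= lr * X.T @ g
    b -= lr * g.sum()

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
acc = float(np.mean((p > 0.5) == (y == 1)))   # training accuracy
```

In the study's framing, the comparison of interest is how this accuracy changes as the lagged columns are added or withheld for different MJO phases and lead times.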
Explaining machine learning models with interactive natural language conversations using TalkToModel
Abstract: Practitioners increasingly use machine learning (ML) models, yet models have become more complex and harder to understand. To understand complex models, researchers have proposed techniques to explain model predictions. However, practitioners struggle to use explainability methods because they do not know which explanation to choose and how to interpret the explanation. Here we address the challenge of using explainability methods by proposing TalkToModel: an interactive dialogue system that explains ML models through natural language conversations. TalkToModel consists of three components: an adaptive dialogue engine that interprets natural language and generates meaningful responses; an execution component that constructs the explanations used in the conversation; and a conversational interface. In real-world evaluations, 73% of healthcare workers agreed they would use TalkToModel over existing systems for understanding a disease prediction model, and 85% of ML professionals agreed TalkToModel was easier to use, demonstrating that TalkToModel is highly effective for model explainability.