This content will become publicly available on October 1, 2026

Title: Multi-scale dynamic spatiotemporal graph attention network for forecasting karst spring discharge
Karst aquifers are important groundwater resources that supply drinking water for approximately 25% of the world's population. Their complex hydrogeological structures, dual-flow regimes, and highly heterogeneous flow pose significant challenges for accurate hydrodynamic modeling and sustainable management. Traditional modeling approaches often struggle to capture the intricate spatial dependencies and multi-scale temporal patterns inherent in karst systems, particularly the interactions between rapid conduit flow and slower matrix flow. This study proposes a novel multi-scale dynamic graph attention network integrated with a long short-term memory model (GAT-LSTM) to learn and integrate spatial and temporal dependencies in karst systems for forecasting spring discharge. The model introduces several innovative components: (1) a graph-based neural network with a dynamic edge-weighting mechanism that learns and updates spatial dependencies based on both geographic distances and learned hydrological relationships, (2) a multi-head attention mechanism that captures different aspects of spatial relationships simultaneously, and (3) a hierarchical temporal architecture that processes hydrological patterns at both monthly and seasonal scales, with an adaptive fusion mechanism combining the two for the final prediction. These features enable the proposed model to account effectively for the dual-flow dynamics of karst systems, where rapid conduit flow and slower matrix flow coexist. The model is applied to Barton Springs of the Edwards Aquifer in Texas. The results demonstrate that it achieves more accurate and robust predictions across various forecast horizons than traditional temporal and spatial deep learning approaches.
Based on the multi-scale GAT-LSTM model, a comprehensive ablation analysis and a permutation feature importance analysis are conducted to quantify the relative contribution of each input variable to the final prediction. These findings highlight the intricate nature of karst systems and demonstrate that effective spring discharge prediction requires comprehensive monitoring networks encompassing both primary recharge contributors and supplementary hydrological features that may serve as valuable indicators of system-wide conditions.
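The dynamic edge-weighting idea described in component (1) can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the blending weight `alpha`, the distance kernel, and the single-head aggregation are illustrative assumptions standing in for the paper's learned multi-head mechanism.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dynamic_edge_attention(h, dist, w_learn, alpha=0.5):
    """One attention pass over station features h (n, d).

    Edge weights blend a geographic-distance kernel with a learned
    hydrological affinity matrix w_learn (n, n), echoing the paper's
    dynamic edge-weighting idea; alpha balances the two terms.
    """
    geo = np.exp(-dist)                 # closer stations -> larger weight
    scores = alpha * geo + (1 - alpha) * w_learn
    attn = softmax(scores, axis=1)      # normalize over each node's neighbors
    return attn @ h                     # attention-weighted aggregation
```

Because each row of the attention matrix sums to one, the aggregation is a convex combination of neighbor features; in the full model these weights would be updated during training rather than fixed.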
Award ID(s):
2407963
PAR ID:
10590600
Author(s) / Creator(s):
Publisher / Repository:
Elsevier
Date Published:
Journal Name:
Journal of Hydrology
Volume:
659
Issue:
C
ISSN:
0022-1694
Page Range / eLocation ID:
133289
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Karst groundwater is a critical freshwater resource for numerous regions worldwide. Monitoring and predicting karst spring discharge is essential for effective groundwater management and the preservation of karst ecosystems. However, high heterogeneity and karstification pose significant challenges to physics-based models in providing robust predictions of karst spring discharge. In this study, an interpretable multi-step hybrid deep learning model called selective EEMD-TFT is proposed, which adaptively integrates temporal fusion transformers (TFT) with ensemble empirical mode decomposition (EEMD) for predicting karst spring discharge. The selective EEMD-TFT hybrid model leverages the strengths of both EEMD and TFT techniques to learn inherent patterns and temporal dynamics from nonlinear and nonstationary signals, eliminate redundant components, and emphasize useful characteristics of input variables, improving prediction performance and efficiency. It consists of two stages. In the first stage, daily precipitation data are decomposed into multiple intrinsic mode functions using EEMD to extract valuable information from nonlinear and nonstationary signals. All decomposed components, together with temperature and categorical date features, are then fed into the TFT model, an attention-based deep learning model that combines high-performance multi-horizon prediction with interpretable insights into temporal dynamics; the importance of each input variable is quantified and ranked. In the second stage, the decomposed precipitation components with high importance are selected to serve as the TFT model's input features, along with temperature and categorical date variables, for the final prediction. Results indicate that the selective EEMD-TFT model outperforms other sequence-to-sequence deep learning models, such as LSTM and single TFT models, delivering reliable and robust prediction performance. Notably, it maintains more consistent prediction performance at longer forecast horizons than other sequence-to-sequence models, highlighting its capacity to learn complex patterns from the input data and efficiently extract valuable information for karst spring prediction. An interpretable analysis of the selective EEMD-TFT model is conducted to gain insights into relationships among various hydrological processes and to analyze temporal patterns.
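The two-stage decompose-then-select workflow can be sketched in numpy. The sketch substitutes simple moving-average band splitting for EEMD and a correlation score for the TFT's learned variable importances; both stand-ins, the window sizes, and `k` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def band_decompose(x, windows=(3, 10, 30)):
    """Stand-in for EEMD: split a signal into fine-to-coarse bands via
    successive moving averages. Real EEMD (e.g. via the PyEMD package)
    would yield intrinsic mode functions instead."""
    comps, resid = [], x.astype(float)
    for w in windows:
        smooth = np.convolve(resid, np.ones(w) / w, mode="same")
        comps.append(resid - smooth)   # detail captured at this scale
        resid = smooth
    comps.append(resid)                # final low-frequency trend
    return comps                       # components sum back to x

def select_components(comps, target, k=2):
    """Stage 2 stand-in: rank components by |correlation| with the
    target (in place of the TFT's learned importances) and keep the
    top-k for the final prediction model."""
    scores = [abs(np.corrcoef(c, target)[0, 1]) for c in comps]
    order = np.argsort(scores)[::-1]
    return [comps[i] for i in order[:k]]
```

The key property preserved from the paper's design is that the decomposition is lossless (components sum to the original signal) while the selection step discards low-importance components before the final model is trained.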
  2. Abstract Sparse precipitation data in karst catchments challenge hydrologic models to accurately capture the spatial and temporal relationships between precipitation and karst spring discharge, hindering robust predictions. This study addresses this issue by employing a coupled deep learning model that integrates a variational autoencoder (VAE) for augmenting precipitation and a long short‐term memory (LSTM) network for karst spring discharge prediction. The VAE contributes by generating synthetic precipitation data through an encoding‐decoding process. This process generalizes the observed precipitation data by deriving joint latent distributions with improved preservation of temporal and spatial correlations of the data. The combined VAE‐generated precipitation and observation data are used to train and test the LSTM to predict spring discharge. Applied to the Niangziguan spring catchment in northern China, the average performance of NSE, root mean square error, mean absolute error, mean absolute percentage error, and log NSE of our coupled VAE/LSTM model reached 0.93, 0.26, 0.15, 1.8, and 0.92, respectively, improvements of 145%, 52%, 63%, 70%, and 149% over an LSTM model using only observations. We also explored temporal and spatial correlations in the observed data and the impact of different ratios of VAE‐generated precipitation data to actual data on model performances. This study also evaluated the effectiveness of VAE‐augmented data on various deep‐learning models and compared VAE with other data augmentation techniques. We demonstrate that the VAE offers a novel approach to address data scarcity and uncertainty, improving learning generalization and predictive capability of various hydrological models. However, we recognize that innovations to address hydrologic problems at different scales remain to be explored.
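The VAE's sampling step and the idea of mixing synthetic with observed precipitation can be sketched in numpy. The decoder here is a toy stand-in (perturbed copies of the observations); the noise level and clipping are illustrative assumptions, since the real model learns a joint latent distribution over stations.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps, so the
    sampling step stays differentiable in a real (e.g. PyTorch) model."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def augment_precipitation(obs, n_synth, noise=0.1):
    """Toy stand-in for a trained VAE decoder: draw latent samples
    around the observed series and treat them as synthetic records,
    then stack synthetic and observed data for LSTM training."""
    mu = np.repeat(obs[None, :], n_synth, axis=0)
    log_var = np.full_like(mu, np.log(noise ** 2))
    synth = np.clip(reparameterize(mu, log_var), 0.0, None)  # rain >= 0
    return np.vstack([obs[None, :], synth])
```

The ratio of synthetic to observed rows (here `n_synth` to 1) corresponds to the augmentation ratio whose effect on model performance the study explores.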
  3. Abstract In many regions globally, snowmelt‐recharged mountainous karst aquifers serve as crucial sources for municipal and agricultural water supplies. In these watersheds, the complex interplay of meteorological, topographical, and hydrogeological factors leads to intricate recharge‐discharge pathways. This study introduces a spatially distributed deep learning precipitation‐runoff model that combines Convolutional Long Short‐Term Memory (ConvLSTM) with a spatial attention mechanism. The effectiveness of the deep learning model was evaluated using data from the Logan River watershed and subwatersheds, a characteristically karst‐dominated hydrological system in northern Utah. Compared to the ConvLSTM baseline, the inclusion of a spatial attention mechanism improved performance for simulating discharge at the watershed outlet. Analysis of attention weights in the trained model unveiled distinct areas contributing the most to discharge under snowmelt and recession conditions. Furthermore, fine‐tuning the model at subwatershed scales provided insights into cross‐subwatershed subsurface connectivity. These findings align with results obtained from detailed hydrogeochemical tracer studies. Results highlight the potential of the proposed deep learning approach to unravel the complexities of karst aquifer systems, offering valuable insights for water resource management under future climate conditions. Finally, results suggest that the proposed explainable, spatially distributed, deep learning approach to hydrologic modeling holds promise for non‐karstic watersheds.
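The interpretable spatial attention step can be illustrated with a minimal numpy sketch over a gridded feature map. The channel-mean scoring is an illustrative assumption (a trained model would learn the scoring function); what the sketch preserves is that the same weight map both pools features and serves as the interpretable "which cells contribute" output.

```python
import numpy as np

def spatial_attention(feature_maps):
    """Soft spatial attention over an (H, W, C) feature grid: score each
    cell, softmax across space, and return both the attended feature
    vector and the weight map used for interpretation (e.g. locating
    snowmelt source areas in the trained model)."""
    h, w, c = feature_maps.shape
    scores = feature_maps.mean(axis=-1).reshape(-1)   # toy scoring: channel mean
    e = np.exp(scores - scores.max())
    weights = (e / e.sum()).reshape(h, w)             # sums to 1 over the grid
    context = (feature_maps * weights[..., None]).sum(axis=(0, 1))
    return context, weights
```

In the full ConvLSTM model this weighting is applied at each time step, and inspecting `weights` over snowmelt versus recession periods is what reveals the contributing areas the abstract describes.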
  4. Advancements in robotics and AI have increased the demand for interactive robots in healthcare and assistive applications. However, ensuring safe and effective physical human-robot interactions (pHRIs) remains challenging due to the complexities of human motor communication and intent recognition. Traditional physics-based models struggle to capture the dynamic nature of human force interactions, limiting robotic adaptability. To address these limitations, neural networks (NNs) have been explored for force-movement intention prediction. While multi-layer perceptron (MLP) networks show potential, they struggle with temporal dependencies and generalization. Long Short-Term Memory (LSTM) networks effectively model sequential dependencies, while Convolutional Neural Networks (CNNs) enhance spatial feature extraction from human force data. Building on these strengths, this study introduces a hybrid LSTM-CNN framework to improve force-movement intention prediction, increasing accuracy from 69% to 86% through effective denoising and advanced architectures. The hybrid network proved particularly effective in handling individualized force-velocity relationships and presents a generalizable model, paving the way for more adaptive strategies in robot guidance. These findings highlight the importance of integrating spatial and temporal modeling to enhance robot precision, responsiveness, and human-robot collaboration. Index Terms: Physical Human-Robot Interaction, Intention Detection, Machine Learning, Long Short-Term Memory (LSTM)
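The CNN-then-LSTM pipeline shape can be sketched in numpy: a 1-D convolution extracts local patterns from a force signal, and an LSTM cell accumulates them over time. The kernel, hidden size, and scalar-per-step feature are illustrative assumptions; the study's actual architecture and training are not reproduced here.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def conv1d_features(force, kernel):
    """CNN stage: a 1-D convolution extracting local patterns
    (e.g. force onsets) from the raw signal."""
    return np.convolve(force, kernel, mode="valid")

def lstm_step(x, h, c, W, U, b):
    """LSTM stage: one cell update over a scalar conv feature x.
    Gates are packed as four stacked blocks (input, forget,
    candidate, output) in W (4n,), U (4n, n), b (4n,)."""
    n = h.size
    z = W * x + U @ h + b
    i, f = sigmoid(z[:n]), sigmoid(z[n:2 * n])
    g, o = np.tanh(z[2 * n:3 * n]), sigmoid(z[3 * n:])
    c_new = f * c + i * g            # gated memory update
    return o * np.tanh(c_new), c_new # new hidden and cell state
```

Running `lstm_step` over the output of `conv1d_features` mirrors the division of labor the abstract describes: spatial feature extraction first, temporal dependency modeling second.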
  5. Accurate cancer subtype prediction is crucial for personalized medicine. Integrating multi-omics data represents a viable approach to comprehending the intricate pathophysiology of complex diseases like cancer. Conventional machine learning techniques are not ideal for analyzing the complex interrelationships among different categories of omics data. Numerous graph-based learning models have been proposed to uncover hidden representations and network structures unique to distinct types of omics data, improving cancer prediction and patient profiling, among other applications aimed at better disease management in medical research. The existing graph-based state-of-the-art multi-omics integration approaches for cancer subtype prediction, MOGONET and SUPREME, use a graph convolutional network (GCN), which fails to consider the level of importance of neighboring nodes on a particular node. To address this gap, we hypothesize that attending to each neighbor, or weighting neighbors by their importance, might improve cancer subtype prediction. The natural choice for determining the importance of each neighbor of a node in a graph is the graph attention network (GAT). Here, we propose MOGAT, a novel multi-omics integration approach leveraging GAT models that combine graph-based learning with an attention mechanism. MOGAT utilizes a multi-head attention mechanism to extract appropriate information for a specific sample by assigning unique attention coefficients to neighboring samples. To the best of our knowledge, this is the first work to explore GAT for multi-omics integration in cancer subtype prediction. To evaluate the performance of MOGAT in predicting cancer subtypes, we explored two sets of breast cancer data from TCGA and METABRIC. Our proposed approach, MOGAT, outperforms MOGONET by 32% to 46% and SUPREME by 2% to 16% in cancer subtype prediction in different scenarios, supporting our hypothesis. Our results also showed that GAT embeddings provide a better prognosis in differentiating the high-risk group from the low-risk group than raw features.
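The per-neighbor attention coefficients that distinguish GAT from GCN can be sketched for a single head in numpy. The scoring vector `a` and the LeakyReLU slope follow the standard GAT formulation; this is an illustrative single-head sketch, not MOGAT's multi-head implementation.

```python
import numpy as np

def gat_attention_coefficients(h_i, neighbors, a, leaky=0.2):
    """Single-head GAT attention: score each pair [h_i || h_j] with the
    learned vector a, apply LeakyReLU, then softmax so a sample's
    neighbors compete for influence on its representation."""
    scores = []
    for h_j in neighbors:
        s = a @ np.concatenate([h_i, h_j])
        scores.append(s if s > 0 else leaky * s)   # LeakyReLU
    scores = np.array(scores)
    e = np.exp(scores - scores.max())
    return e / e.sum()                             # coefficients sum to 1
```

A GCN would aggregate neighbors with fixed degree-based weights; here each neighbor receives its own learned coefficient, which is the mechanism MOGAT exploits to weight neighboring samples by importance.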