Karst aquifers are important groundwater resources that supply drinking water for approximately 25% of the world's population. Their complex hydrogeological structures, dual-flow regimes, and highly heterogeneous flow pose significant challenges for accurate hydrodynamic modeling and sustainable management. Traditional modeling approaches often struggle to capture the intricate spatial dependencies and multi-scale temporal patterns inherent in karst systems, particularly the interactions between rapid conduit flow and slower matrix flow. This study proposes a novel multi-scale dynamic graph attention network integrated with a long short-term memory model (GAT-LSTM) to learn and integrate spatial and temporal dependencies in karst systems for forecasting spring discharge. The model introduces several innovative components: (1) a graph-based neural network with a dynamic edge-weighting mechanism that learns and updates spatial dependencies based on both geographic distances and learned hydrological relationships, (2) a multi-head attention mechanism that captures different aspects of spatial relationships simultaneously, and (3) a hierarchical temporal architecture that processes hydrological patterns at both monthly and seasonal scales with an adaptive fusion mechanism for the final prediction. These features enable the proposed model to account effectively for the dual-flow dynamics of karst systems, in which rapid conduit flow and slower matrix flow coexist. The model is applied to Barton Springs of the Edwards Aquifer in Texas. The results demonstrate that it achieves more accurate and robust predictions across various time steps than traditional temporal and spatial deep learning approaches. Based on the multi-scale GAT-LSTM model, a comprehensive ablation analysis and permutation feature importance analysis are conducted to assess the relative contributions of the input variables to the final prediction. These findings highlight the intricate nature of karst systems and demonstrate that effective spring discharge prediction requires comprehensive monitoring networks encompassing both primary recharge contributors and supplementary hydrological features that may serve as valuable indicators of system-wide conditions.
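The spatial-temporal design described in this abstract lends itself to a compact sketch. The following PyTorch snippet is a minimal, hypothetical illustration (module names, layer sizes, and the seasonal stride are assumptions, not the authors' implementation): multi-head attention with a learnable edge-weight bias mixes station features at each time step, two LSTMs model the sequence at monthly and seasonal resolution, and a learned gate adaptively fuses the two predictions.

```python
import torch
import torch.nn as nn

class MultiScaleGATLSTM(nn.Module):
    def __init__(self, n_nodes, in_dim, hidden=64, heads=4, season_len=3):
        super().__init__()
        self.season_len = season_len
        self.embed = nn.Linear(in_dim, hidden)
        # Multi-head attention over monitoring stations (the graph-attention step).
        self.gat = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Dynamic edge weights: a learned bias added to a geographic-distance bias.
        self.edge_bias = nn.Parameter(torch.zeros(n_nodes, n_nodes))
        self.lstm_month = nn.LSTM(hidden * n_nodes, hidden, batch_first=True)
        self.lstm_season = nn.LSTM(hidden * n_nodes, hidden, batch_first=True)
        self.gate = nn.Linear(2 * hidden, 1)
        self.head_m = nn.Linear(hidden, 1)
        self.head_s = nn.Linear(hidden, 1)

    def forward(self, x, dist):
        # x: (batch, time, n_nodes, in_dim); dist: (n_nodes, n_nodes) pairwise distances
        b, t, n, _ = x.shape
        h = self.embed(x).reshape(b * t, n, -1)
        # Larger distance => more negative additive bias => smaller attention weight.
        attn_bias = -(dist + self.edge_bias)
        h, _ = self.gat(h, h, h, attn_mask=attn_bias)            # spatial mixing per time step
        h = h.reshape(b, t, -1)
        monthly, _ = self.lstm_month(h)                           # full (monthly) resolution
        seasonal, _ = self.lstm_season(h[:, ::self.season_len])   # coarser seasonal stride
        y_m = self.head_m(monthly[:, -1])
        y_s = self.head_s(seasonal[:, -1])
        g = torch.sigmoid(self.gate(torch.cat([monthly[:, -1], seasonal[:, -1]], dim=-1)))
        return g * y_m + (1 - g) * y_s                            # adaptive fusion of the two scales

model = MultiScaleGATLSTM(n_nodes=5, in_dim=4)
q_hat = model(torch.randn(2, 24, 5, 4), dist=torch.rand(5, 5))    # 2 samples, 24 monthly steps
```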
This content will become publicly available on August 18, 2026
Comparison of CNN and LSTM Networks on Human Intention Prediction in Physical Human-Robot Interactions
Advancements in robotics and AI have increased the demand for interactive robots in healthcare and assistive applications. However, ensuring safe and effective physical human-robot interactions (pHRIs) remains challenging due to the complexities of human motor communication and intent recognition. Traditional physics-based models struggle to capture the dynamic nature of human force interactions, limiting robotic adaptability. To address these limitations, neural networks (NNs) have been explored for force-movement intention prediction. While multi-layer perceptron (MLP) networks show potential, they struggle with temporal dependencies and generalization. Long Short-Term Memory (LSTM) networks effectively model sequential dependencies, while Convolutional Neural Networks (CNNs) enhance spatial feature extraction from human force data. Building on these strengths, this study introduces a hybrid LSTM-CNN framework to improve force-movement intention prediction, increasing accuracy from 69% to 86% through effective denoising and advanced architectures. The combined CNN-LSTM network proved particularly effective in handling individualized force-velocity relationships and presents a generalizable model, paving the way for more adaptive strategies in robot guidance. These findings highlight the importance of integrating spatial and temporal modeling to enhance robot precision, responsiveness, and human-robot collaboration. Index Terms — Physical Human-Robot Interaction, Intention Detection, Machine Learning, Long Short-Term Memory (LSTM)
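As a rough illustration of the hybrid architecture described above (a sketch with assumed layer sizes and channel counts, not the paper's exact network), 1-D convolutions extract spatial features from the multi-channel force signal, an LSTM captures their temporal dependence, and a dense head outputs the predicted movement intention:

```python
import torch
import torch.nn as nn

class CNNLSTMIntent(nn.Module):
    def __init__(self, n_channels=6, conv_dim=32, lstm_dim=64, out_dim=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, conv_dim, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(conv_dim, conv_dim, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(conv_dim, lstm_dim, batch_first=True)
        self.head = nn.Linear(lstm_dim, out_dim)

    def forward(self, force):                # force: (batch, time, n_channels)
        z = self.cnn(force.transpose(1, 2))  # convolve over time: (batch, conv_dim, time)
        z, _ = self.lstm(z.transpose(1, 2))  # back to (batch, time, conv_dim)
        return self.head(z[:, -1])           # intention estimate at the last step

model = CNNLSTMIntent()
pred = model(torch.randn(8, 100, 6))          # 8 windows of 100 samples, 6 force channels
```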
- Award ID(s):
- 2046552
- PAR ID:
- 10624001
- Publisher / Repository:
- IEEE
- Date Published:
- Journal Name:
- The Fool
- ISSN:
- 2160-8075
- Format(s):
- Medium: X
- Location:
- Los Angeles, CA, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract Objective. Neural decoding is an important tool in neural engineering and neural data analysis. Of the various machine learning algorithms adopted for neural decoding, the recently introduced deep learning is promising to excel. Therefore, we sought to apply deep learning to decode movement trajectories from the activity of motor cortical neurons. Approach. In this paper, we assessed the performance of deep learning methods in three different decoding schemes: concurrent, time-delay, and spatiotemporal. In the concurrent decoding scheme, where the input to the network is the neural activity coincidental to the movement, deep learning networks including the artificial neural network (ANN) and long short-term memory (LSTM) network were applied to decode movement and compared with traditional machine learning algorithms. Both ANN and LSTM were further evaluated in the time-delay decoding scheme, in which temporal delays are allowed between neural signals and movements. Lastly, in the spatiotemporal decoding scheme, we trained a convolutional neural network (CNN) to extract movement information from images representing the spatial arrangement of neurons, their activity, and connectomes (i.e., the relative strengths of connectivity between neurons), and combined CNN and ANN to develop a hybrid spatiotemporal network. To reveal the input features of the CNN in the hybrid network that deep learning discovered for movement decoding, we performed a sensitivity analysis and identified specific regions in the spatial domain. Main results. Deep learning networks (ANN and LSTM) outperformed traditional machine learning algorithms in the concurrent decoding scheme. The results of ANN and LSTM in the time-delay decoding scheme showed that including neural data from time points preceding movement enabled decoders to perform more robustly when the temporal relationship between the neural activity and movement changes dynamically over time. In the spatiotemporal decoding scheme, the hybrid spatiotemporal network containing the concurrent ANN decoder outperformed single-network concurrent decoders. Significance. Taken together, our study demonstrates that deep learning could become a robust and effective method for the neural decoding of behavior.
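A minimal sketch of the difference between the concurrent and time-delay decoding schemes described above (array shapes and lag count are assumptions for illustration): the time-delay scheme simply stacks the current spike-rate bin with several preceding bins before a decoder is fit.

```python
import numpy as np

def concurrent_features(rates):
    """Concurrent scheme: features are the spike rates coincident with the movement."""
    return rates

def time_delay_features(rates, n_lags=5):
    """Time-delay scheme: stack the current bin with n_lags preceding bins."""
    lagged = [np.roll(rates, lag, axis=0) for lag in range(n_lags + 1)]
    X = np.concatenate(lagged, axis=1)    # (T, n_neurons * (n_lags + 1))
    return X[n_lags:]                      # drop rows contaminated by the wrap-around

rates = np.random.rand(1000, 50)           # synthetic firing rates: 1000 bins, 50 neurons
X_concurrent = concurrent_features(rates)
X_delayed = time_delay_features(rates, n_lags=5)
```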
-
Robotic technology can benefit disassembly operations by reducing human operators' workload and assisting them with handling hazardous materials. Safety consideration and predicting human movement are priorities in close human-robot collaboration. Point-by-point forecasting of human hand motion, which predicts a single point at each time step, does not provide enough information on human movement because of errors between the actual and predicted values. This study provides a range of possible hand movements to enhance safety. It applies three machine learning techniques, Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Bayesian Neural Network (BNN), combined with Bagging and Monte Carlo Dropout (MCD), namely LSTM-Bagging, GRU-Bagging, and BNN-MCD, to predict the possible movement range. The study uses an Inertial Measurement Unit (IMU) dataset collected from the disassembly of desktop computers to show the application of the proposed method. The findings reveal that BNN-MCD outperforms the other models in forecasting the range of possible hand movement.
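The BNN-MCD idea can be sketched as follows (a hedged illustration with assumed layer sizes and channel counts, not the study's model): dropout is kept active at prediction time, and repeated stochastic forward passes yield a distribution whose spread defines the range of possible hand movements.

```python
import torch
import torch.nn as nn

class DropoutForecaster(nn.Module):
    def __init__(self, in_dim=9, hidden=64, out_dim=3, p=0.2):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.drop = nn.Dropout(p)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):                      # x: (batch, time, in_dim) IMU window
        h, _ = self.lstm(x)
        return self.head(self.drop(h[:, -1]))  # next hand position (x, y, z)

def predict_range(model, x, n_samples=50):
    model.train()                              # keep dropout active (Monte Carlo Dropout)
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(0), samples.std(0)     # center and spread of the possible motion

model = DropoutForecaster()
mean, spread = predict_range(model, torch.randn(1, 50, 9))
```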
-
Significance: The performance of traditional approaches to decoding movement intent from electromyograms (EMGs) and other biological signals commonly degrades over time. Furthermore, conventional algorithms for training neural network-based decoders may not perform well outside the domain of the state transitions observed during training. The work presented in this paper mitigates both of these problems, resulting in an approach that has the potential to substantially improve the quality of life of people with limb loss. Objective: This paper presents and evaluates the performance of four methods for decoding volitional movement intent from intramuscular EMG signals. Methods: The decoders are trained using the dataset aggregation (DAgger) algorithm, in which the training data set is augmented during each training iteration based on the decoded estimates from previous iterations. Four competing decoding methods were developed: polynomial Kalman filters (KFs), multilayer perceptron (MLP) networks, convolutional neural networks (CNN), and Long Short-Term Memory (LSTM) networks. The performance of the four decoding methods was evaluated using EMG data sets recorded from two human volunteers with transradial amputation. Short-term analyses, in which the training and cross-validation data came from the same data set, and long-term analyses, in which training and testing were done on different data sets, were performed. Results: Short-term analyses of the decoders demonstrated that CNN and MLP decoders performed significantly better than KF and LSTM decoders, showing an improvement of up to 60% in the normalized mean-square decoding error in cross-validation tests. Long-term analysis indicated that the CNN, MLP, and LSTM decoders performed significantly better than the KF-based decoder at most analyzed temporal separations (0 to 150 days) between the acquisition of the training and testing data sets. Conclusion: The short-term and long-term performance of the MLP and CNN-based decoders trained with DAgger demonstrated their potential to provide more accurate and naturalistic control of prosthetic hands than alternate approaches.
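The DAgger-style aggregation described here can be sketched conceptually (interfaces, feature construction, and the MLP decoder stand-in are assumptions, not the paper's code): each iteration decodes the training EMG with the current model, pairs the decoder-visited states with the reference movement labels, merges them into the aggregated data set, and retrains.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_features(emg, prev_estimate):
    """Decoder input = EMG features plus the previous movement estimate (the state)."""
    return np.hstack([emg, prev_estimate])

def dagger_train(emg, reference, n_iters=5):
    state = reference.copy()                     # iteration 0: bootstrap with the reference movement
    X, y = build_features(emg, state), reference.copy()
    decoder = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
    decoder.fit(X, y)
    for _ in range(n_iters):
        state = decoder.predict(build_features(emg, state))   # states the decoder actually visits
        X = np.vstack([X, build_features(emg, state)])         # aggregate with earlier iterations
        y = np.vstack([y, reference])                          # keep the reference labels
        decoder.fit(X, y)                                      # retrain on the aggregated set
    return decoder

emg = np.random.rand(2000, 16)                   # synthetic intramuscular EMG features
reference = np.random.rand(2000, 5)              # reference hand-movement targets
decoder = dagger_train(emg, reference)
```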
-
Recurrent neural networks can be trained to serve as a memory for robots to perform intelligent behaviors when localization is not available. This paper develops an approach to convert a spatial map, represented as a scalar field, into a trained memory represented by a long short-term memory (LSTM) neural network. The trained memory can be retrieved through sensor measurements collected by robots to achieve intelligent behaviors, such as tracking level curves in the map. Memory retrieval does not require robot locations. The retrieved information is combined with sensor measurements through a Kalman filter enabled by the LSTM (LSTM-KF). Furthermore, a level curve tracking control law is designed. Simulation results show that the LSTM-KF and the control law are effective at generating level curve tracking behaviors for single-robot and multi-robot teams.
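A minimal sketch of the LSTM-KF idea described above (names, shapes, and the control gain are assumptions): an LSTM trained on measurement sequences serves as the map memory, its prediction is fused with the current noisy reading by a scalar Kalman update, and the fused field estimate drives a simple level-curve tracking law.

```python
import torch
import torch.nn as nn

class FieldMemory(nn.Module):
    """LSTM memory that predicts the local field value from recent measurements."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, z_seq):                    # z_seq: (1, T, 1) past measurements
        h, _ = self.lstm(z_seq)
        return self.out(h[:, -1]).item()         # predicted field value (the prior)

def kalman_fuse(prior, prior_var, z, meas_var):
    """Scalar Kalman update: fuse the LSTM prior with the current sensor reading z."""
    k = prior_var / (prior_var + meas_var)
    return prior + k * (z - prior), (1 - k) * prior_var

def level_curve_heading(field_est, target_level, grad_dir, k_p=1.0):
    """Move along the curve tangent, correcting toward it in proportion to the field error."""
    error = target_level - field_est
    tangent = torch.tensor([-grad_dir[1], grad_dir[0]])
    return tangent + k_p * error * torch.tensor(grad_dir)

memory = FieldMemory()
prior = memory(torch.randn(1, 20, 1))
estimate, _ = kalman_fuse(prior, prior_var=0.5, z=0.8, meas_var=0.1)
heading = level_curve_heading(estimate, target_level=1.0, grad_dir=[1.0, 0.0])
```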