- Award ID(s):
- 1637258
- PAR ID:
- 10075442
- Date Published:
- Journal Name:
- 2017 North American Power Symposium (NAPS)
- Page Range / eLocation ID:
- 1 to 6
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Time series forecasting with additional spatial information has attracted a tremendous amount of attention in recent research, due to its importance in various real-world applications in social studies, such as conflict prediction and pandemic forecasting. Conventional machine learning methods either consider temporal dependencies only, or treat spatial and temporal relations as two separate autoregressive models, namely, space-time autoregressive models. Such methods suffer when it comes to long-term forecasting or predictions for large-scale areas, due to the high nonlinearity and complexity of spatio-temporal data. In this paper, we propose to address these challenges using spatio-temporal graph neural networks. Empirical results on the Violence Early Warning System (ViEWS) dataset and the U.S. COVID-19 dataset indicate that our method significantly improves performance over the baseline approaches.
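The abstract names spatio-temporal graph neural networks but gives no architectural detail. As a hedged illustration only, the sketch below combines a one-hop graph convolution over a fixed adjacency matrix with a GRU over time for node-level one-step forecasting; the class name, tensor shapes, and layer choices are assumptions for illustration, not the authors' model.

```python
# Minimal spatio-temporal GNN sketch (illustrative only; not the paper's exact
# architecture). A one-hop graph convolution over a fixed, row-normalized
# adjacency matrix is followed by a GRU over time; the head predicts the next
# step for every node.
import torch
import torch.nn as nn


class STGNNForecaster(nn.Module):
    def __init__(self, in_dim, hidden_dim):
        super().__init__()
        self.spatial = nn.Linear(in_dim, hidden_dim)   # shared node-wise projection
        self.temporal = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)           # one-step-ahead prediction

    def forward(self, x, adj):
        # x: (batch, time, nodes, features), adj: (nodes, nodes) row-normalized
        b, t, n, f = x.shape
        h = torch.einsum("ij,btjf->btif", adj, x)      # aggregate neighbor features
        h = torch.relu(self.spatial(h))                # (b, t, n, hidden)
        h = h.permute(0, 2, 1, 3).reshape(b * n, t, -1)
        _, last = self.temporal(h)                     # GRU summarizes each node's history
        return self.head(last.squeeze(0)).view(b, n)   # next-step forecast per node


# Toy usage with random data (shapes are assumptions for illustration).
adj = torch.softmax(torch.rand(10, 10), dim=-1)
model = STGNNForecaster(in_dim=3, hidden_dim=16)
out = model(torch.randn(4, 12, 10, 3), adj)            # -> (4, 10)
```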
-
A Bayesian lattice filtering and smoothing approach is proposed for fast and accurate modeling and inference in multivariate non‐stationary time series. This approach offers computational feasibility and interpretable time‐frequency analysis in the multivariate context. The proposed framework allows us to obtain posterior estimates of the time‐varying spectral densities of individual time series components, as well as posterior measurements of the time‐frequency relationships across multiple components, such as time‐varying coherence and partial coherence. The proposed formulation considers multivariate dynamic linear models (MDLMs) on the forward and backward time‐varying partial autocorrelation coefficients (TV‐VPARCOR). Computationally expensive schemes for posterior inference on the multivariate dynamic PARCOR model are avoided using approximations in the MDLM context. Approximate inference on the corresponding time‐varying vector autoregressive (TV‐VAR) coefficients is obtained via Whittle's algorithm. A key aspect of the proposed TV‐VPARCOR representations is that they are of lower dimension, and therefore more efficient, than TV‐VAR representations. The performance of the TV‐VPARCOR models is illustrated in simulation studies and in the analysis of multivariate non‐stationary temporal data arising in neuroscience and environmental applications. Model performance is evaluated using goodness‐of‐fit measurements in the time‐frequency domain and also by assessing the quality of short‐term forecasting.
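As a hedged illustration of why the PARCOR parameterization is more parsimonious than the TV-VAR one, the univariate Durbin–Levinson/Whittle-type recursion below maps time-varying PARCOR coefficients to AR coefficients, with each stage introducing only one new free coefficient; the notation is ours, and the paper itself works with the multivariate (matrix-valued) analogue via Whittle's algorithm.

```latex
% Scalar sketch of the PARCOR-to-AR mapping (the paper uses the multivariate,
% matrix-valued version via Whittle's algorithm); notation here is illustrative.
\begin{align}
  x_t &= \sum_{j=1}^{P} a_{P,j,t}\, x_{t-j} + \epsilon_t,
        \qquad \epsilon_t \sim N(0, \sigma_t^2), \\
  a_{m,m,t} &= \phi_{m,t}
        \quad \text{(lag-$m$ time-varying PARCOR coefficient)}, \\
  a_{m,j,t} &= a_{m-1,j,t} - a_{m,m,t}\, a_{m-1,m-j,t},
        \qquad j = 1,\dots,m-1,\; m = 2,\dots,P.
\end{align}
```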
-
Forecasting time series data is an important subject in economics, business, and finance. Traditionally, there are several techniques for effectively forecasting the next lag of time series data, such as the univariate Autoregressive (AR) model, the univariate Moving Average (MA) model, Simple Exponential Smoothing (SES), and, most notably, the Autoregressive Integrated Moving Average (ARIMA) model with its many variations. In particular, the ARIMA model has demonstrated strong precision and accuracy in predicting the next lags of a time series. With recent advances in computational power and, more importantly, the development of more sophisticated machine learning approaches such as deep learning, new algorithms have been developed to analyze and forecast time series data. The research question investigated in this article is whether and how newly developed deep learning-based algorithms for forecasting time series data, such as Long Short-Term Memory (LSTM), are superior to the traditional algorithms. The empirical studies conducted and reported in this article show that deep learning-based algorithms such as LSTM outperform traditional algorithms such as the ARIMA model. More specifically, the average reduction in error rates obtained by LSTM was between 84 and 87 percent when compared to ARIMA, indicating the superiority of LSTM. Furthermore, the number of training passes, known as "epochs" in deep learning, had no effect on the performance of the trained forecast model, which exhibited truly random behavior in this respect.
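As a hedged sketch of the kind of comparison described above (not the article's exact data, orders, or hyperparameters), the example below produces rolling one-step ARIMA forecasts with statsmodels and trains a small LSTM forecaster on sliding windows with PyTorch; the window size, ARIMA order, and epoch budget are illustrative assumptions.

```python
# Hedged, minimal ARIMA-vs-LSTM comparison sketch (not the article's setup).
import numpy as np
import torch
import torch.nn as nn
from statsmodels.tsa.arima.model import ARIMA

series = np.sin(np.linspace(0, 20, 220)) + 0.1 * np.random.randn(220)
train, test = series[:200], series[200:]

# --- ARIMA: refit and forecast one step at a time over the test period ---
history = list(train)
arima_preds = []
for actual in test:
    fit = ARIMA(history, order=(5, 1, 0)).fit()
    arima_preds.append(fit.forecast(steps=1)[0])
    history.append(actual)

# --- LSTM: predict the next value from the previous `window` observations ---
window = 10
X = np.stack([train[i:i + window] for i in range(len(train) - window)])
y = train[window:]
X_t = torch.tensor(X, dtype=torch.float32).unsqueeze(-1)   # (samples, window, 1)
y_t = torch.tensor(y, dtype=torch.float32).unsqueeze(-1)

class LSTMForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])                     # use the last time step

model, loss_fn = LSTMForecaster(), nn.MSELoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                                        # fixed, small epoch budget
    opt.zero_grad()
    loss = loss_fn(model(X_t), y_t)
    loss.backward()
    opt.step()

rmse = np.sqrt(np.mean((np.array(arima_preds) - test) ** 2))
print(f"ARIMA rolling one-step RMSE: {rmse:.3f}")
```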
-
As a decisive factor in the success of Mobility-as-a-Service (MaaS), spatio-temporal dynamics modeling on mobility networks is a challenging task, particularly in scenarios where open-world events drive mobility behavior away from its routines. While tremendous progress has been made in modeling high-level spatio-temporal regularities with deep learning, most, if not all, of the existing methods are neither aware of the dynamic interactions among multiple transport modes on mobility networks nor adaptive to the unprecedented volatility brought by potential open-world events. In this paper, we are therefore motivated to improve the canonical spatio-temporal network (ST-Net) from two perspectives: (1) design a heterogeneous mobility information network (HMIN) to explicitly represent intermodality in multimodal mobility; (2) propose a memory-augmented dynamic filter generator (MDFG) to generate sequence-specific parameters on the fly for various scenarios. The enhanced event-aware spatio-temporal network, namely EAST-Net, is evaluated on several real-world datasets with a wide variety and coverage of open-world events. Both quantitative and qualitative experimental results verify the superiority of our approach over the state-of-the-art baselines. What is more, experiments show the ability of EAST-Net to generalize, performing zero-shot inference over open-world events that have not been seen.
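The abstract describes a memory-augmented dynamic filter generator (MDFG) that emits sequence-specific parameters on the fly, but not its internals. The sketch below is a generic, hedged rendering of the dynamic-filter idea, in which a query summarizing the input sequence attends over a learned memory and the readout is mapped to the weights of a per-sequence linear filter; all names and dimensions are assumptions, and the actual EAST-Net design may differ substantially.

```python
# Generic sketch of a memory-augmented dynamic filter generator (illustrative;
# the actual MDFG design in EAST-Net may differ). A query summarizing the input
# sequence attends over a learned memory, and the readout is mapped to the
# weights of a per-sequence linear filter applied to the hidden features.
import torch
import torch.nn as nn


class DynamicFilterGenerator(nn.Module):
    def __init__(self, feat_dim, mem_slots=8):
        super().__init__()
        self.memory = nn.Parameter(torch.randn(mem_slots, feat_dim))
        self.to_weights = nn.Linear(feat_dim, feat_dim * feat_dim)

    def forward(self, h):
        # h: (batch, time, feat_dim) hidden features of one input sequence
        query = h.mean(dim=1)                                  # (batch, feat)
        attn = torch.softmax(query @ self.memory.t(), dim=-1)  # (batch, slots)
        readout = attn @ self.memory                           # (batch, feat)
        w = self.to_weights(readout).view(-1, h.size(-1), h.size(-1))
        return torch.einsum("btf,bfg->btg", h, w)              # sequence-specific filtering


h = torch.randn(4, 12, 16)
out = DynamicFilterGenerator(feat_dim=16)(h)                   # -> (4, 12, 16)
```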
-
Abstract This paper presents Granger mediation analysis, a new framework for causal mediation analysis of multiple time series. This framework is motivated by a functional magnetic resonance imaging (fMRI) experiment where we are interested in estimating the mediation effects between a randomized stimulus time series and brain activity time series from two brain regions. The independent observation assumption is thus unrealistic for this type of time-series data. To address this challenge, our framework integrates two types of models: causal mediation analysis across the mediation variables, and vector autoregressive (VAR) models across the temporal observations. We use “Granger” to refer to VAR correlations modeled in this paper. We further extend this framework to handle multilevel data, in order to model individual variability and correlated errors between the mediator and the outcome variables. Using Rubin's potential outcome framework, we show that the causal mediation effects are identifiable under our time-series model. We further develop computationally efficient algorithms to maximize our likelihood-based estimation criteria. Simulation studies show that our method reduces the estimation bias and improves statistical power, compared with existing approaches. On a real fMRI data set, our approach quantifies the causal effects through a brain pathway, while capturing the dynamic dependence between two brain regions.
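As a hedged sketch, in our own notation, of the single-level structure suggested by the abstract: mediation regressions link the stimulus $Z_t$, mediator $M_t$, and outcome $R_t$, while a VAR($p$) model captures the temporal dependence of the errors (the "Granger" component).

```latex
% Hedged sketch (our notation, not the paper's exact specification) of the
% single-level structure: mediation regressions with VAR(p) errors.
\begin{align}
  M_t &= Z_t\,\alpha + E_{1,t}, \\
  R_t &= Z_t\,\gamma + M_t\,\beta + E_{2,t}, \\
  \begin{pmatrix} E_{1,t} \\ E_{2,t} \end{pmatrix}
      &= \sum_{j=1}^{p} \Psi_j
         \begin{pmatrix} E_{1,t-j} \\ E_{2,t-j} \end{pmatrix} + \omega_t,
      \qquad \omega_t \sim N(0, \Sigma),
\end{align}
where $\beta$ carries the mediation pathway from $M_t$ to $R_t$ and the
$\Psi_j$ matrices capture the ``Granger'' (VAR) correlations across time.
```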