Title: Discrete Graph Structure Learning for Forecasting Multiple Time Series
Time series forecasting is an extensively studied subject in statistics, economics, and computer science. Exploration of the correlation and causation among the variables in a multivariate time series shows promise in enhancing the performance of a time series model. When using deep neural networks as forecasting models, we hypothesize that exploiting the pairwise information among multiple (multivariate) time series also improves their forecast. If an explicit graph structure is known, graph neural networks (GNNs) have been demonstrated as powerful tools to exploit the structure. In this work, we propose learning the structure simultaneously with the GNN if the graph is unknown. We cast the problem as learning a probabilistic graph model through optimizing the mean performance over the graph distribution. The distribution is parameterized by a neural network so that discrete graphs can be sampled differentiably through reparameterization. Empirical evaluations show that our method is simpler, more efficient, and better performing than a recently proposed bilevel learning approach for graph structure learning, as well as a broad array of forecasting models, either deep or non-deep learning based, and graph or non-graph based.
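As a concrete illustration of the mechanism the abstract describes, the sketch below samples a discrete adjacency matrix differentiably from edge probabilities produced by a neural network, using Gumbel-softmax reparameterization. The names (EdgeProbNet, sample_adjacency) and all dimensions are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch: differentiable sampling of a discrete graph via
# Gumbel-softmax reparameterization (names and sizes are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeProbNet(nn.Module):
    """Maps per-series feature vectors to a logit pair for each candidate edge (i, j)."""
    def __init__(self, feat_dim, hidden_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 2),  # logits for "edge" vs. "no edge"
        )

    def forward(self, z):
        n = z.size(0)
        zi = z.unsqueeze(1).expand(n, n, -1)   # features of node i
        zj = z.unsqueeze(0).expand(n, n, -1)   # features of node j
        return self.mlp(torch.cat([zi, zj], dim=-1))  # (n, n, 2)

def sample_adjacency(logits, tau=0.5):
    """Draw a discrete adjacency matrix; gradients flow through the soft sample."""
    y = F.gumbel_softmax(logits, tau=tau, hard=True)  # (n, n, 2), one-hot on last dim
    return y[..., 0]  # slice for the "edge" class = sampled 0/1 edge indicators

# Usage: z holds one feature vector per time series (e.g. from an encoder).
z = torch.randn(10, 32)
edge_net = EdgeProbNet(feat_dim=32)
adj = sample_adjacency(edge_net(z))  # (10, 10) matrix of 0/1 entries
```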
Award ID(s):
1718738
NSF-PAR ID:
10253603
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of International Conference on Learning Representations
Page Range / eLocation ID:
1-14
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Few-shot machine learning attempts to predict outputs given only a very small number of training examples. The key idea behind most few-shot learning approaches is to pre-train the model on a large number of instances from different but related classes for which abundant training data are available. Few-shot learning has been demonstrated most successfully for classification problems using Siamese deep neural networks, and it has been applied less extensively to time-series forecasting. Few-shot forecasting is the task of predicting future values of a time series even when only a small set of historical time series is available; it has applications in domains where a long history of data does not exist. This work describes deep neural network architectures for few-shot forecasting. All the architectures use a Siamese twin-network approach to learn a difference function between pairs of time series, rather than forecasting directly from historical data as in traditional forecasting models. The networks are built from long short-term memory (LSTM) units. During forecasting, a model can forecast time-series types never seen in the training data by using the few available instances of the new time-series type as reference inputs. The proposed architectures are evaluated on vehicular traffic data collected in California from the Caltrans Performance Measurement System (PeMS). The models were trained with traffic flow data collected at specific locations and then evaluated by predicting traffic at different locations over time horizons of 0 to 12 hours. Mean absolute error (MAE) was used both as the evaluation metric and as the training loss. The proposed architectures show lower prediction error than a baseline nearest-neighbor forecast model, and the prediction error increases at longer time horizons.
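To make the Siamese pairing idea concrete, here is a minimal PyTorch sketch of a twin LSTM encoder that learns a difference function between a query series and a reference series. The class name SiameseForecaster, the dimensions, and the output head are assumptions for illustration, not the authors' exact architecture.

```python
# Minimal sketch of a Siamese LSTM for few-shot forecasting (illustrative
# names and sizes; not the published architecture).
import torch
import torch.nn as nn

class SiameseForecaster(nn.Module):
    def __init__(self, input_dim=1, hidden_dim=64, horizon=12):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)  # shared twin encoder
        self.head = nn.Linear(2 * hidden_dim, horizon)  # maps the pair embedding to forecast deltas

    def embed(self, x):
        _, (h, _) = self.encoder(x)      # x: (batch, time, input_dim)
        return h[-1]                     # final hidden state: (batch, hidden_dim)

    def forward(self, query, reference):
        pair = torch.cat([self.embed(query), self.embed(reference)], dim=-1)
        return self.head(pair)           # predicted difference from the reference series' future

# Training would minimize MAE (L1 loss) between the predicted difference and
# the true difference of the two series' future values.
model = SiameseForecaster()
query, reference = torch.randn(8, 48, 1), torch.randn(8, 48, 1)
loss = nn.L1Loss()(model(query, reference), torch.randn(8, 12))
```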
  2. Table of Contents:
     Front matter: Foreword by the CI 2016 Workshop Chairs (vi); Foreword by the CI 2016 Steering Committee (viii); List of Organizing Committee (x); List of Registered Participants (xi); Acknowledgement of Sponsors (xiv); Hackathon and Workshop Agenda (xv); Hackathon Summary (xviii); Invited talks - abstracts and links to presentations (xxi); Proceedings: 34 short research papers (pp. 1-135).
     Papers:
     1. BAYESIAN MODELS FOR CLIMATE RECONSTRUCTION FROM POLLEN RECORDS (p. 1). Lasse Holmström, Liisa Ilvonen, Heikki Seppä, Siim Veski
     2. ON INFORMATION CRITERIA FOR DYNAMIC SPATIO-TEMPORAL CLUSTERING (p. 5). Ethan D. Schaeffer, Jeremy M. Testa, Yulia R. Gel, Vyacheslav Lyubchich
     3. DETECTING MULTIVARIATE BIOSPHERE EXTREMES (p. 9). Yanira Guanche García, Erik Rodner, Milan Flach, Sebastian Sippel, Miguel Mahecha, Joachim Denzler
     4. SPATIO-TEMPORAL GENERATIVE MODELS FOR RAINFALL OVER INDIA (p. 13). Adway Mitra
     5. A NONPARAMETRIC COPULA BASED BIAS CORRECTION METHOD FOR STATISTICAL DOWNSCALING (p. 17). Yi Li, Adam Ding, Jennifer Dy
     6. DETECTING AND PREDICTING BEAUTIFUL SUNSETS USING SOCIAL MEDIA DATA (p. 21). Emma Pierson
     7. OCEANTEA: EXPLORING OCEAN-DERIVED CLIMATE DATA USING MICROSERVICES (p. 25). Arne N. Johanson, Sascha Flögel, Wolf-Christian Dullo, Wilhelm Hasselbring
     8. IMPROVED ANALYSIS OF EARTH SYSTEM MODELS AND OBSERVATIONS USING SIMPLE CLIMATE MODELS (p. 29). Balu Nadiga, Nathan Urban
     9. SYNERGY AND ANALOGY BETWEEN 15 YEARS OF MICROWAVE SST AND ALONG-TRACK SSH (p. 33). Pierre Tandeo, Aitor Atencia, Cristina Gonzalez-Haro
     10. PREDICTING EXECUTION TIME OF CLIMATE-DRIVEN ECOLOGICAL FORECASTING MODELS (p. 37). Scott Farley and John W. Williams
     11. SPATIOTEMPORAL ANALYSIS OF SEASONAL PRECIPITATION OVER US USING CO-CLUSTERING (p. 41). Mohammad Gorji–Sefidmazgi, Clayton T. Morrison
     12. PREDICTION OF EXTREME RAINFALL USING HYBRID CONVOLUTIONAL-LONG SHORT TERM MEMORY NETWORKS (p. 45). Sulagna Gope, Sudeshna Sarkar, Pabitra Mitra
     13. SPATIOTEMPORAL PATTERN EXTRACTION WITH DATA-DRIVEN KOOPMAN OPERATORS FOR CONVECTIVELY COUPLED EQUATORIAL WAVES (p. 49). Joanna Slawinska, Dimitrios Giannakis
     14. COVARIANCE STRUCTURE ANALYSIS OF CLIMATE MODEL OUTPUT (p. 53). Chintan Dalal, Doug Nychka, Claudia Tebaldi
     15. SIMPLE AND EFFICIENT TENSOR REGRESSION FOR SPATIOTEMPORAL FORECASTING (p. 57). Rose Yu, Yan Liu
     16. TRACKING OF TROPICAL INTRASEASONAL CONVECTIVE ANOMALIES (p. 61). Bohar Singh, James L. Kinter
     17. ANALYSIS OF AMAZON DROUGHTS USING SUPERVISED KERNEL PRINCIPAL COMPONENT ANALYSIS (p. 65). Carlos H. R. Lima, Amir AghaKouchak
     18. A BAYESIAN PREDICTIVE ANALYSIS OF DAILY PRECIPITATION DATA (p. 69). Sai K. Popuri, Nagaraj K. Neerchal, Amita Mehta
     19. INCORPORATING PRIOR KNOWLEDGE IN SPATIO-TEMPORAL NEURAL NETWORK FOR CLIMATIC DATA (p. 73). Arthur Pajot, Ali Ziat, Ludovic Denoyer, Patrick Gallinari
     20. DIMENSIONALITY-REDUCTION OF CLIMATE DATA USING DEEP AUTOENCODERS (p. 77). Juan A. Saenz, Nicholas Lubbers, Nathan M. Urban
     21. MAPPING PLANTATION IN INDONESIA (p. 81). Xiaowei Jia, Ankush Khandelwal, James Gerber, Kimberly Carlson, Paul West, Vipin Kumar
     22. FROM CLIMATE DATA TO A WEIGHTED NETWORK BETWEEN FUNCTIONAL DOMAINS (p. 85). Ilias Fountalis, Annalisa Bracco, Bistra Dilkina, Constantine Dovrolis
     23. EMPLOYING SOFTWARE ENGINEERING PRINCIPLES TO ENHANCE MANAGEMENT OF CLIMATOLOGICAL DATASETS FOR CORAL REEF ANALYSIS (p. 89). Mark Jenne, M.M. Dalkilic, Claudia Johnson
     24. Profiler Guided Manual Optimization for Accelerating Cholesky Decomposition on R Environment (p. 93). V.B. Ramakrishnaiah, R.P. Kumar, J. Paige, D. Hammerling, D. Nychka
     25. GLOBAL MONITORING OF SURFACE WATER EXTENT DYNAMICS USING SATELLITE DATA (p. 97). Anuj Karpatne, Ankush Khandelwal and Vipin Kumar
     26. TOWARD QUANTIFYING TROPICAL CYCLONE RISK USING DIAGNOSTIC INDICES (p. 101). Erica M. Staehling and Ryan E. Truchelut
     27. OPTIMAL TROPICAL CYCLONE INTENSITY ESTIMATES WITH UNCERTAINTY FROM BEST TRACK DATA (p. 105). Suz Tolwinski-Ward
     28. EXTREME WEATHER PATTERN DETECTION USING DEEP CONVOLUTIONAL NEURAL NETWORK (p. 109). Yunjie Liu, Evan Racah, Prabhat, Amir Khosrowshahi, David Lavers, Kenneth Kunkel, Michael Wehner, William Collins
     29. INFORMATION TRANSFER ACROSS TEMPORAL SCALES IN ATMOSPHERIC DYNAMICS (p. 113). Nikola Jajcay and Milan Paluš
     30. Identifying precipitation regimes in China using model-based clustering of spatial functional data (p. 117). Haozhe Zhang, Zhengyuan Zhu, Shuiqing Yin
     31. RELATIONAL RECURRENT NEURAL NETWORKS FOR SPATIOTEMPORAL INTERPOLATION FROM MULTI-RESOLUTION CLIMATE DATA (p. 121). Guangyu Li, Yan Liu
     32. OBJECTIVE SELECTION OF ENSEMBLE BOUNDARY CONDITIONS FOR CLIMATE DOWNSCALING (p. 124). Andrew Rhines, Naomi Goldenson
     33. LONG-LEAD PREDICTION OF EXTREME PRECIPITATION CLUSTER VIA A SPATIO-TEMPORAL CONVOLUTIONAL NEURAL NETWORK (p. 128). Yong Zhuang, Wei Ding
     34. MULTIPLE INSTANCE LEARNING FOR BURNED AREA MAPPING USING MULTI-TEMPORAL REFLECTANCE DATA (p. 132). Guruprasad Nayak, Varun Mithal, Vipin Kumar
  3.
    Rank position forecasting in car racing is a challenging problem for deep learning models over time-series data. It features highly complex global dependencies among the racing cars, uncertainty resulting from existing and external factors, and data scarcity. Existing methods, including statistical models, machine learning regression models, and several state-of-the-art deep forecasting models, all perform poorly on this problem. Through an elaborate analysis of pit stop events, we find it critical to decompose the cause-and-effect relationship and to model rank positions and pit stop events separately. In choosing a sub-model from different neural network models, we find that the model with weak assumptions on the global dependency structure performs best. Based on these observations, we propose RankNet, a combination of an encoder-decoder network and a separate multilayer perceptron network, capable of delivering probabilistic forecasts of pit stop events and rank positions in car racing. With the help of feature optimizations, RankNet demonstrates a significant performance improvement: MAE improves by 19% on the two-lap forecasting task and by 7% on the stint forecasting task over the best baseline, and the model is also more stable when adapting to unseen new data. Details of the model optimizations and performance profiling are presented. This work is promising for applying neural networks to forecasting racing cars and sheds light on solutions to similarly challenging issues in general forecasting problems.
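The decomposition described above, a separate model for pit-stop events alongside an encoder-decoder rank forecaster with probabilistic outputs, can be sketched as follows. The module names, dimensions, and the Gaussian-style output head are assumptions for illustration, not the published RankNet code.

```python
# Illustrative sketch: encoder-decoder rank forecaster plus a separate MLP
# for pit-stop events (names and sizes are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class RankEncoderDecoder(nn.Module):
    def __init__(self, input_dim=4, hidden_dim=64, horizon=2):
        super().__init__()
        self.encoder = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.decoder = nn.LSTM(1, hidden_dim, batch_first=True)
        self.mu = nn.Linear(hidden_dim, 1)      # mean of the forecast rank
        self.sigma = nn.Linear(hidden_dim, 1)   # spread, for probabilistic output
        self.horizon = horizon

    def forward(self, history, last_rank):
        _, state = self.encoder(history)                          # history: (batch, time, input_dim)
        dec_in = last_rank.unsqueeze(1).repeat(1, self.horizon, 1)  # last_rank: (batch, 1)
        out, _ = self.decoder(dec_in, state)
        return self.mu(out).squeeze(-1), F.softplus(self.sigma(out)).squeeze(-1)

class PitStopMLP(nn.Module):
    """Separate model for the cause (pit stop) whose output can feed the rank forecaster's features."""
    def __init__(self, input_dim=4, hidden_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU(),
                                 nn.Linear(hidden_dim, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x)  # probability of a pit stop in the next window

# Usage on random placeholder data.
mu, sigma = RankEncoderDecoder()(torch.randn(8, 20, 4), torch.randn(8, 1))
p_pit = PitStopMLP()(torch.randn(8, 4))
```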
  4. Graph-structured data are abundant in the real world. Among different graph types, directed acyclic graphs (DAGs) are of particular interest to machine learning researchers, as many machine learning models are realized as computations on DAGs, including neural networks and Bayesian networks. In this paper, we study deep generative models for DAGs and propose a novel DAG variational autoencoder (D-VAE). To encode DAGs into the latent space, we leverage graph neural networks. We propose an asynchronous message passing scheme that allows encoding the computations on DAGs, rather than using existing simultaneous message passing schemes to encode local graph structures. We demonstrate the effectiveness of our proposed D-VAE through two tasks: neural architecture search and Bayesian network structure learning. Experiments show that our model not only generates novel and valid DAGs, but also produces a smooth latent space that facilitates searching for DAGs with better performance through Bayesian optimization.
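The asynchronous message passing idea can be illustrated with a toy encoder that visits nodes in topological order and updates each node's state from the states of its predecessors. The class name AsyncDAGEncoder and the GRU-based update are illustrative assumptions, not the D-VAE reference implementation.

```python
# Toy sketch of asynchronous message passing over a DAG: nodes are processed
# in topological order, so each update sees fully computed predecessor states.
import torch
import torch.nn as nn

class AsyncDAGEncoder(nn.Module):
    def __init__(self, node_feat_dim, hidden_dim=64):
        super().__init__()
        self.cell = nn.GRUCell(node_feat_dim, hidden_dim)
        self.hidden_dim = hidden_dim

    def forward(self, node_feats, topo_order, predecessors):
        """node_feats: (n, node_feat_dim); topo_order: list of node ids;
        predecessors: dict mapping node id -> list of parent node ids."""
        h = torch.zeros(node_feats.size(0), self.hidden_dim)
        for v in topo_order:
            parents = predecessors.get(v, [])
            # aggregate parent states (zeros for source nodes)
            msg = h[parents].sum(dim=0) if parents else torch.zeros(self.hidden_dim)
            h[v] = self.cell(node_feats[v].unsqueeze(0), msg.unsqueeze(0)).squeeze(0)
        return h[topo_order[-1]]  # state of the final node summarizes the DAG

# Usage on a tiny 3-node chain 0 -> 1 -> 2.
enc = AsyncDAGEncoder(node_feat_dim=8)
z = enc(torch.randn(3, 8), topo_order=[0, 1, 2], predecessors={1: [0], 2: [1]})
```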
  5. Solar flare prediction is a central problem in space weather forecasting and has captivated the attention of a wide spectrum of researchers due to recent advances in both remote sensing and machine learning and deep learning approaches. Experimental findings based on both machine learning and deep learning models reveal significant performance improvements on task-specific datasets. Beyond building models, deploying them to production environments under operational settings is a more complex and time-consuming process that is often not addressed directly in research settings. We present a set of new heuristic approaches to train and deploy an operational solar flare prediction system for ≥M1.0-class flares with two prediction modes: full-disk and active region-based. In full-disk mode, predictions are performed on full-disk line-of-sight magnetograms using deep learning models, whereas in active region-based mode, predictions are issued for each active region individually using multivariate time series data instances. The outputs from the individual active region forecasts and the full-disk predictor are combined into a final full-disk prediction with a meta-model. We used an equal-weighted average ensemble of the two base learners' flare probabilities as our baseline meta-learner and improved upon it by training a logistic regression model. The major findings of this study are: 1) we successfully coupled two heterogeneous flare prediction models, trained with different datasets and model architectures, to predict a full-disk flare probability for the next 24 h; 2) our proposed ensembling model, i.e., logistic regression, improves on the predictive performance of the two base learners and the baseline meta-learner, measured in terms of two widely used metrics, the True Skill Statistic (TSS) and the Heidke Skill Score (HSS); and 3) our result analysis suggests that the logistic regression-based ensemble (Meta-FP) improves on the full-disk model (base learner) by ∼9% in terms of TSS and ∼10% in terms of HSS. Similarly, it improves on the AR-based model (base learner) by ∼17% and ∼20% in terms of TSS and HSS, respectively. Finally, compared to the baseline meta-model, it improves TSS by ∼10% and HSS by ∼15%.
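For concreteness, here is a minimal sketch of the stacking scheme described above: an equal-weighted average baseline versus a logistic-regression meta-learner over the two base learners' probabilities. The data below is random placeholder input, not the study's flare dataset.

```python
# Minimal stacking sketch: combine full-disk and active-region flare
# probabilities with (a) an equal-weight average baseline and (b) a
# logistic-regression meta-learner (placeholder data, not the study's).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
p_fd = rng.uniform(size=1000)    # full-disk base learner probabilities
p_ar = rng.uniform(size=1000)    # active-region base learner probabilities
y = (0.5 * p_fd + 0.5 * p_ar + 0.1 * rng.normal(size=1000) > 0.5).astype(int)

# Baseline meta-learner: equal-weighted average of the two probabilities.
p_baseline = 0.5 * p_fd + 0.5 * p_ar

# Stacked meta-learner: logistic regression fit on the base probabilities.
X = np.column_stack([p_fd, p_ar])
meta = LogisticRegression().fit(X[:800], y[:800])
p_meta = meta.predict_proba(X[800:])[:, 1]   # calibrated full-disk probability
```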