Title: Multivariate Deep Causal Network for Time Series Forecasting in Interdependent Networks
A novel multivariate deep causal network model (MDCN) is proposed in this paper, which combines the theory of conditional variance with deep neural networks to identify cause-effect relationships between different interdependent time series. The MDCN is validated in a two-step approach: self-validation is performed with information-theory-based metrics, and cross-validation is achieved through a forecasting application that combines actual interdependent electricity, transportation, and weather datasets from the City of Tallahassee, Florida, USA.
Award ID(s):
1640587
NSF-PAR ID:
10091760
Journal Name:
2018 IEEE Conference on Decision and Control (CDC)
Page Range or eLocation-ID:
6476–6481
Sponsoring Org:
National Science Foundation
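
The abstract's core idea, testing whether conditioning on one series reduces the predictive variance of another, can be illustrated with a Granger-style check. The sketch below is a minimal toy version of such a conditional-variance test, not the authors' MDCN (which replaces the linear models with deep networks); the lag order, toy data, and function names are assumptions made here for illustration.

```python
# Granger-style causality check via conditional variance (illustrative toy only;
# the paper's MDCN replaces the linear models below with deep networks).
import numpy as np

def lagged_design(series_list, p):
    """Stack p lagged copies of each series into a design matrix."""
    n = len(series_list[0])
    cols = [s[p - k - 1 : n - k - 1] for s in series_list for k in range(p)]
    return np.column_stack(cols)

def residual_variance(X, y):
    """Variance of least-squares residuals of y regressed on [1, X]."""
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.var(y - A @ beta)

def conditional_variance_test(effect, cause, p=3):
    """Compare AR(p) residual variance of `effect` with and without lags of `cause`."""
    y = effect[p:]
    var_own = residual_variance(lagged_design([effect], p), y)
    var_both = residual_variance(lagged_design([effect, cause], p), y)
    return var_own, var_both  # a large drop suggests `cause` -> `effect`

# Toy data: x drives y with a one-step delay.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 1) + 0.1 * rng.standard_normal(500)
print(conditional_variance_test(y, x))  # variance shrinks sharply when x's lags are included
```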
More Like this
  1. The ability to accurately quantify dielectrophoretic (DEP) force is critical in the development of high-efficiency microfluidic systems. This is the first reported work that combines a textile electrode-based DEP sensing system with deep learning to estimate the DEP forces exerted on microparticles. We demonstrate how our deep learning model can process micrographs of pearl chains of polystyrene (PS) microbeads to estimate the DEP forces experienced. Numerous images obtained from our experiments at varying input voltages were preprocessed and used to train three deep convolutional neural networks, namely AlexNet, MobileNetV2, and VGG19 (a minimal fine-tuning sketch appears after this list). The performance of all models was assessed in terms of validation accuracy. Models were also tested with adversarial images to evaluate classification accuracy and resilience under noise, image blur, and contrast changes. The results indicated that our method is robust under unfavorable real-world settings, demonstrating that it can be used for the direct estimation of dielectrophoretic force in point-of-care settings.
  2. While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about its approximation power, the dynamics of optimization, and good out-of-sample performance despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory, both shallow and deep networks are known to approximate any continuous function, but at an exponential cost. However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality. In characterizing minimization of the empirical exponential loss, we consider the gradient flow of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to normalized networks. The dynamics of the normalized weights turn out to be equivalent to those of the constrained problem of minimizing the loss subject to a unit-norm constraint (a compact statement of these dynamics appears after this list). In particular, the dynamics of typical gradient descent have the same critical points as the constrained problem. Thus there is implicit regularization in training deep networks under exponential-type loss functions during gradient flow. As a consequence, the critical points correspond to minimum-norm infima of the loss. This result is especially relevant because it has recently been shown that, for overparameterized models, selection of a minimum-norm solution optimizes cross-validation leave-one-out stability and thereby the expected error. Thus our results imply that gradient descent in deep networks minimizes the expected error.
  3. Properties in material composition and crystal structures have been explored with density functional theory (DFT) calculations, using databases such as the Open Quantum Materials Database (OQMD). Such databases are now used to train advanced machine learning and deep neural network models, the latter providing higher performance when predicting properties of materials. However, current alternatives have shown a deterioration in accuracy as the number of layers in their architecture increases (an over-fitting problem). To address this problem, we have implemented residual neural network architectures based on Merge-and-Run networks, IRNet, and UNet to improve performance while relaxing the observed network-depth limitation (a minimal residual-block sketch appears after this list). The evaluation of the proposed architectures includes a 9:1 train/test split as well as 10-fold cross-validation. In our experiments we found that the proposed architectures based on IRNet and UNet obtain a lower mean absolute error (MAE) than current strategies. The full implementation (Python, TensorFlow, and Keras) and the trained networks will be available online for community validation and for advancing the state of the art from our findings.
  4. Geospatio-temporal data are pervasive across numerous application domains. These rich datasets can be harnessed to predict extreme events such as disease outbreaks, flooding, and crime spikes. However, since extreme events are rare, predicting them is a hard problem. Statistical methods based on extreme value theory provide a systematic way of modeling the distribution of extreme values. In particular, the generalized Pareto distribution (GPD) is useful for modeling the distribution of excess values above a certain threshold. However, applying such methods to large-scale geospatio-temporal data is challenging due to the difficulty of capturing the complex spatial relationships between extreme events at multiple locations. This paper presents a deep learning framework for long-term prediction of the distribution of extreme values at different locations. We highlight its computational challenges and present a novel framework that combines convolutional neural networks with deep sets and the GPD (a peaks-over-threshold sketch appears after this list). We demonstrate the effectiveness of our approach on a real-world dataset for modeling extreme climate events.
  5. Interdependent critical infrastructures in coastal regions, including transportation, the electrical grid, and emergency services, are continually threatened by storm-induced flooding. This has been demonstrated repeatedly, most recently by hurricanes such as Harvey and Maria, as well as Sandy and Katrina. Protecting these infrastructures with robust mechanisms is critical for our continued existence along the world's coastlines. Planning these protections is non-trivial given the rare-event nature of strong storms and of climate change manifested through sea-level rise. This article proposes a methodological framework that combines multiple computational models, stakeholder interviews, and optimization to find an optimal protective strategy over time for critical coastal infrastructure while respecting budgetary constraints.
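
For item 1 above, here is a minimal sketch of the kind of transfer-learning setup it describes, fine-tuning one of the three named backbones on labeled micrographs. The class count, input size, classification head, and data pipeline are assumptions made here for illustration, not the authors' configuration.

```python
# Illustrative fine-tuning of a pretrained MobileNetV2 to classify micrographs of
# microbead pearl chains into force levels; class count and input size are assumed.
import tensorflow as tf

NUM_CLASSES = 5  # hypothetical number of voltage/force-level classes

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze ImageNet features; train only the new head

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# `train_ds`/`val_ds` would be tf.data pipelines of (micrograph, label) pairs, e.g. from
# tf.keras.utils.image_dataset_from_directory("micrographs/", image_size=(224, 224)).
# model.fit(train_ds, validation_data=val_ds, epochs=10)
```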
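For item 2, a compact statement of the normalized-weight dynamics the review summarizes; the notation is chosen here as a sketch of the standard projection argument, not quoted from the paper.

```latex
% Write w = \rho v with \rho = \|w\| and \|v\| = 1. The gradient flow
% \dot{w} = -\nabla L(w) then splits into a radial part, \dot{\rho} = -v^{\top}\nabla L(w),
% and a tangential part governing the weight direction:
\[
  \dot{v} \;=\; -\frac{1}{\rho}\,\bigl(I - v v^{\top}\bigr)\,\nabla L(\rho v).
\]
% Up to the time rescaling by 1/\rho, this is gradient flow of L restricted to the
% unit sphere, i.e. the constrained problem of minimizing the loss subject to a
% unit-norm constraint, so the two dynamics share their critical points.
```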
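For item 3, a minimal fully connected residual block of the kind such architectures stack; the descriptor size, layer widths, depth, and regression head are illustrative assumptions, not the IRNet/UNet configurations evaluated in the paper.

```python
# Minimal residual block for regressing a scalar material property (e.g., formation
# energy) from a composition descriptor; all sizes below are assumed for illustration.
import tensorflow as tf

def residual_block(x, units):
    """Two dense layers whose output is added back to the input (skip connection)."""
    h = tf.keras.layers.Dense(units, activation="relu")(x)
    h = tf.keras.layers.Dense(units)(h)
    h = tf.keras.layers.Add()([x, h])  # the skip path eases training of deep stacks
    return tf.keras.layers.Activation("relu")(h)

inputs = tf.keras.Input(shape=(128,))   # hypothetical composition descriptor
x = tf.keras.layers.Dense(64, activation="relu")(inputs)
for _ in range(8):                      # depth can grow without the usual degradation
    x = residual_block(x, 64)
outputs = tf.keras.layers.Dense(1)(x)   # scalar property prediction

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="mae")  # MAE, the metric reported in the abstract
```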
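For item 4, the classical peaks-over-threshold step the framework builds on, sketched generically with SciPy on synthetic data; the threshold choice is arbitrary, and none of this reproduces the paper's convolutional/deep-set model.

```python
# Fit a generalized Pareto distribution (GPD) to exceedances over a high threshold,
# the classical peaks-over-threshold step underlying extreme-value modeling.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gumbel(loc=20.0, scale=5.0, size=10_000)  # synthetic "climate" values

threshold = np.quantile(data, 0.95)  # arbitrary high threshold (95th percentile)
excesses = data[data > threshold] - threshold

# Fix the location at 0 so only the shape (xi) and scale are estimated for the excesses.
shape, loc, scale = stats.genpareto.fit(excesses, floc=0)
print(f"xi = {shape:.3f}, scale = {scale:.3f}")

# Tail estimate: P(X > x) = P(X > u) * survival of the fitted GPD at x - u.
p_u = (data > threshold).mean()
x = threshold + 15.0
print("P(X > x) ≈", p_u * stats.genpareto.sf(x - threshold, shape, loc=0, scale=scale))
```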