
Title: Multivariate Deep Causal Network for Time Series Forecasting in Interdependent Networks
A novel multivariate deep causal network (MDCN) model is proposed in this paper, which combines the theory of conditional variance with deep neural networks to identify cause-effect relationships between different interdependent time series. The MDCN is validated in a two-step approach: self-validation is performed with information-theory-based metrics, and cross-validation is achieved through a forecasting application that combines actual interdependent electricity, transportation, and weather datasets from the City of Tallahassee, Florida, USA.
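The conditional-variance notion of causality the abstract refers to can be illustrated with a minimal Granger-style check (a sketch only, not the MDCN model itself; the synthetic series, coefficients, and threshold below are hypothetical): if conditioning on the past of one series reduces the residual variance of another, the first is flagged as a cause.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical interdependent series: y depends on the previous value of x.
n = 500
x = rng.normal(size=n)
eps = 0.1 * rng.normal(size=n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + eps[t]

def residual_variance(target, predictors):
    """Least-squares fit of target on predictors (plus intercept); residual variance."""
    A = np.column_stack(predictors + [np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return np.var(target - A @ coef)

# Restricted model: y's own past only.  Full model: also condition on x's past.
var_restricted = residual_variance(y[1:], [y[:-1]])
var_full = residual_variance(y[1:], [y[:-1], x[:-1]])

# A large drop in conditional variance flags x as a cause of y.
x_causes_y = var_full < 0.5 * var_restricted
print(var_restricted, var_full, x_causes_y)
```

The same comparison run in the reverse direction (conditioning x on y's past) would show little variance reduction, which is how directionality is recovered.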
Award ID(s):
1640587
NSF-PAR ID:
10091760
Journal Name:
2018 IEEE Conference on Decision and Control (CDC)
Page Range / eLocation ID:
6476 to 6481
Sponsoring Org:
National Science Foundation
More Like this
  1. The ability to accurately quantify dielectrophoretic (DEP) force is critical in the development of high-efficiency microfluidic systems. This is the first reported work that combines a textile electrode-based DEP sensing system with deep learning in order to estimate the DEP forces invoked on microparticles. We demonstrate how our deep learning model can process micrographs of pearl chains of polystyrene (PS) microbeads to estimate the DEP forces experienced. Numerous images obtained from our experiments at varying input voltages were preprocessed and used to train three deep convolutional neural networks, namely AlexNet, MobileNetV2, and VGG19. The performance of each model was tested in terms of validation accuracy. The models were also tested with adversarial images to evaluate classification accuracy and resilience under noise, image blur, and contrast changes. The results indicate that our method is robust under unfavorable real-world settings, demonstrating that it can be used for the direct estimation of dielectrophoretic force in point-of-care settings.
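The convolution operation at the core of the networks named above (AlexNet, MobileNetV2, VGG19) can be sketched in plain NumPy; the patch and kernel below are hypothetical stand-ins for a micrograph and a learned filter, not the paper's actual pipeline.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D cross-correlation, the primitive CNN layers are built on."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Hypothetical micrograph patch: a bright vertical "pearl chain" on a dark field.
patch = np.zeros((8, 8))
patch[:, 3] = 1.0

# A vertical-edge kernel responds strongly along the chain's boundaries.
kernel = np.array([[1.0, 0.0, -1.0]] * 3)
feature_map = conv2d_valid(patch, kernel)
print(feature_map.shape)
```

Stacks of such filters, learned from the preprocessed micrographs, are what let the pretrained networks map pearl-chain geometry to a force class.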
  2. Abstract

    Measuring the demographic parameters of exploited populations is central to predicting their vulnerability and extinction risk. However, current rates of population decline and species loss greatly outpace our ability to empirically monitor all populations that are potentially threatened.

    The scale of this problem cannot be addressed through additional data collection alone, and therefore it is a common practice to conduct population assessments based on surrogate data collected from similar species. However, this approach introduces biases and imprecisions that are difficult to quantify. Recent developments in hierarchical modelling have enabled missing values to be reconstructed based on the correlations between available life‐history data, linking similar species based on phylogeny and environmental conditions.

    However, these methods cannot resolve life‐history variability among populations or species that are closely placed spatially or taxonomically. Here, theoretically motivated constraints that align with life‐history theory offer a new avenue for addressing this problem. We describe a Bayesian hierarchical approach that combines fragmented, multispecies and multi‐population data with established life‐history theory, in order to objectively determine similarity between populations based on trait correlations (life‐history trade‐offs) obtained from model fitting.

    We reconstruct 59 unobserved life‐history parameters for 23 populations of tuna that sustain some of the world's most valuable fisheries. Testing by cross‐validation across different scenarios indicated that life‐histories were accurately reconstructed when information was available for other populations of the same species. The reconstruction of several traits was also accurate for species represented by a single population, although credible intervals increased dramatically.

    Synthesis and applications. The described Bayesian hierarchical method provides access to life‐history traits that are difficult to measure directly and reconstructs missing life‐history information useful for assessing populations and species that are directly or indirectly affected by human exploitation of natural resources. The method is particularly useful for examining populations that are spatially or taxonomically similar, and the reconstructed life‐history strategies described for the principal market tunas have immediate application to the world‐wide management of these fisheries.
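The behaviour described above, accurate reconstruction when other populations of the same species carry information and wider uncertainty otherwise, can be illustrated with a minimal partial-pooling estimate (a sketch, not the authors' Bayesian model; all observations and variance components below are hypothetical).

```python
import numpy as np

# Hypothetical growth-rate observations for three populations of one species;
# population C is data-poor, with a single noisy measurement.
obs = {
    "A": np.array([0.30, 0.32, 0.29, 0.31, 0.30]),
    "B": np.array([0.25, 0.27, 0.26, 0.24]),
    "C": np.array([0.45]),
}

sigma2_within = 0.02 ** 2   # assumed measurement variance
tau2_between = 0.03 ** 2    # assumed between-population variance

species_mean = np.mean([v.mean() for v in obs.values()])

# Partial pooling: each population mean is pulled toward the species-level mean,
# with more shrinkage the fewer observations a population has.
pooled = {}
for pop, v in obs.items():
    n = len(v)
    w = tau2_between / (tau2_between + sigma2_within / n)
    pooled[pop] = w * v.mean() + (1 - w) * species_mean

print(pooled)
```

The data-poor population is shrunk hardest toward the shared mean, which is the mechanism that lets information flow from well-sampled populations to sparsely sampled ones.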

     
  3. While deep learning is successful in a number of applications, it is not yet well understood theoretically. A theoretical characterization of deep learning should answer questions about the approximation power of deep networks, the dynamics of optimization, and good out-of-sample performance despite overparameterization and the absence of explicit regularization. We review our recent results toward this goal. In approximation theory, both shallow and deep networks are known to approximate any continuous function, but at an exponential cost. However, we proved that for certain types of compositional functions, deep networks of the convolutional type (even without weight sharing) can avoid the curse of dimensionality. In characterizing minimization of the empirical exponential loss, we consider the gradient flow of the weight directions rather than the weights themselves, since the relevant function underlying classification corresponds to normalized networks. The dynamics of the normalized weights turn out to be equivalent to those of the constrained problem of minimizing the loss subject to a unit-norm constraint. In particular, the dynamics of typical gradient descent have the same critical points as the constrained problem. Thus there is implicit regularization in training deep networks under exponential-type loss functions during gradient flow. As a consequence, the critical points correspond to minimum-norm infima of the loss. This result is especially relevant because it has recently been shown that, for overparameterized models, selection of a minimum-norm solution optimizes cross-validation leave-one-out stability and thereby the expected error. Thus our results imply that gradient descent in deep networks minimizes the expected error.
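The phenomenon the abstract analyzes, raw weights diverging while the normalized weights stabilize, can be observed numerically even for a linear model trained by gradient descent on the exponential loss (a sketch of the phenomenon only, not a deep network; the data and step size below are hypothetical).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linearly separable toy data with labels in {-1, +1}.
X = np.vstack([rng.normal(loc=+3.0, size=(20, 2)),
               rng.normal(loc=-3.0, size=(20, 2))])
y = np.concatenate([np.ones(20), -np.ones(20)])

w = np.zeros(2)
lr = 0.1
norms, dirs = [], []
for _ in range(2000):
    margins = y * (X @ w)
    # Gradient of the empirical exponential loss mean(exp(-y * w.x)).
    grad = -(y[:, None] * X * np.exp(-margins)[:, None]).mean(axis=0)
    w -= lr * grad
    norms.append(np.linalg.norm(w))
    dirs.append(w / (np.linalg.norm(w) + 1e-12))

# The weight norm keeps growing, but the normalized direction drifts less and
# less: the dynamics of the weight directions are what carry the information.
drift_late = np.linalg.norm(dirs[-1] - dirs[-100])
print(norms[200], norms[-1], drift_late)
```

Because classification depends only on the direction of `w`, tracking the normalized weights (equivalently, the unit-norm constrained problem) isolates the part of the dynamics that matters.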

     
  4. Alloying is a common technique to optimize the functional properties of materials for thermoelectrics, photovoltaics, energy storage, etc. Designing thermoelectric (TE) alloys is especially challenging because it is a multi-property optimization problem, where the properties that contribute to high TE performance are interdependent. In this work, we develop a computational framework that combines first-principles calculations with alloy and point defect modeling to identify alloy compositions that optimize the electronic, thermal, and defect properties. We apply this framework to design n-type Ba2(1−x)Sr2xCdP2 Zintl thermoelectric alloys. Our predictions of crystallographic properties such as lattice parameters and site disorder are validated with experiments. To optimize the conduction band electronic structure, we perform band unfolding to sketch the effective band structures of the alloys and find a range of compositions that facilitate band convergence and minimize alloy scattering of electrons. We assess the n-type dopability of the alloys by extending the standard approach for computing point defect energetics in ordered structures. Through the application of this framework, we identify an optimal alloy composition range with the desired electronic and thermal transport properties and n-type dopability. Such a computational framework can also be used to design alloys for functional applications beyond TE.
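The standard point-defect energetics that the framework extends can be sketched with the usual formation-energy expression, E_f(q, E_F) = E_def − E_host − Σᵢ nᵢμᵢ + q(E_VBM + E_F); every number below is hypothetical, chosen only to show the linear dependence on the Fermi level that dopability assessments are read from.

```python
# Hypothetical numbers illustrating the standard point-defect formalism:
# E_f(q, E_F) = E_def - E_host - sum_i(n_i * mu_i) + q * (E_VBM + E_F)
E_host = -100.0      # total energy of the pristine supercell (eV), hypothetical
E_def = -96.5        # total energy of the defective supercell (eV), hypothetical
n_added = {"P": -1}  # one P atom removed, i.e. a vacancy (hypothetical)
mu = {"P": -5.0}     # phosphorus chemical potential (eV), hypothetical
q = 1                # defect charge state
E_VBM = -2.0         # valence-band maximum (eV), hypothetical

def formation_energy(E_F):
    """Formation energy of the charged defect at Fermi level E_F (eV above the VBM)."""
    exchange = sum(n * mu[s] for s, n in n_added.items())
    return E_def - E_host - exchange + q * (E_VBM + E_F)

# Linear in the Fermi level with slope q; crossings between charge states are
# what dopability assessments read off such diagrams.
print([round(formation_energy(0.25 * i), 3) for i in range(5)])
```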
  5. Abstract Interdependent critical infrastructures in coastal regions, including transportation, the electrical grid, and emergency services, are continually threatened by storm-induced flooding. This has been demonstrated a number of times, most recently by hurricanes such as Harvey and Maria, as well as Sandy and Katrina. The need to protect these infrastructures with robust protection mechanisms is critical for our continued existence along the world's coastlines. Planning these protections is non-trivial given the rare-event nature of strong storms and climate change manifested through sea-level rise. This article proposes a methodology that combines multiple computational models, stakeholder interviews, and optimization to find an optimal protective strategy over time for critical coastal infrastructure under budgetary constraints.
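The budget-constrained selection step can be sketched as a small 0/1 knapsack-style search (the measures, costs, and risk reductions below are hypothetical, and this is not the article's optimization model).

```python
from itertools import combinations

# Hypothetical protective measures: (name, cost in $M, expected flood-risk reduction).
measures = [
    ("seawall", 50, 30.0),
    ("substation elevation", 20, 18.0),
    ("road raising", 35, 22.0),
    ("pump stations", 15, 10.0),
]
budget = 70

# Brute-force 0/1 selection: fine for a handful of candidate measures; a real
# planning model would add timing, interdependencies, and storm uncertainty.
best, best_value = (), 0.0
for r in range(1, len(measures) + 1):
    for combo in combinations(measures, r):
        cost = sum(c for _, c, _ in combo)
        value = sum(v for _, _, v in combo)
        if cost <= budget and value > best_value:
            best, best_value = combo, value

print([name for name, _, _ in best], best_value)
```

Note that the cheapest-per-dollar bundle can beat the single largest project, which is why the selection has to be solved jointly rather than greedily.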