Title: Efficient identification for modeling high-dimensional brain dynamics
System identification poses a significant bottleneck to characterizing and controlling complex systems. This challenge is greatest when both the system states and parameters are not directly accessible, leading to a dual-estimation problem. Current approaches to such problems are limited in their ability to scale with many-parameter systems, as often occurs in networks. In this work, we present a new, computationally efficient approach to large dual-estimation problems: we derive analytic back-propagated gradients for the Prediction Error Method (PEM), enabling efficient and accurate identification of large systems. The PEM approach directly integrates state estimation into a dual-optimization objective, leaving a differentiable cost/error function only in terms of the unknown system parameters, which we solve using numerical gradient/Hessian methods. Intuitively, this approach solves for the parameters that generate the most accurate state estimator (Extended/Cubature Kalman Filter). We demonstrate that this approach is at least as accurate in state and parameter estimation as joint Kalman Filters (Extended/Unscented/Cubature) and Expectation-Maximization, despite lower complexity. We demonstrate the utility of our approach by inverting anatomically detailed, individualized brain models from human magnetoencephalography (MEG) data.
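To make the PEM idea concrete, here is a minimal sketch on a scalar linear system with one unknown transition parameter: the cost is the filter's accumulated one-step prediction error, and, in place of the paper's analytic back-propagated gradients, scipy's finite-difference BFGS supplies the numerical gradients. All model values are toy assumptions, not the paper's setup.

```python
# Minimal PEM sketch: fit the parameters that make the Kalman filter's
# one-step prediction errors smallest. Toy 1-D linear system; 'a' unknown.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulate data from the "true" system: x_{k+1} = a*x_k + w,  y_k = x_k + v
a_true, Q, R, T = 0.9, 0.05, 0.1, 200
x, ys = 1.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, np.sqrt(Q))
    ys.append(x + rng.normal(0, np.sqrt(R)))
ys = np.array(ys)

def prediction_error_cost(theta):
    """Run a Kalman filter parameterized by theta and accumulate the
    (likelihood-weighted) squared one-step prediction errors."""
    a = theta[0]
    x_hat, P, cost = 0.0, 1.0, 0.0
    for y in ys:
        # Predict
        x_pred = a * x_hat
        P_pred = a * P * a + Q
        # Innovation = prediction error; this is what PEM minimizes
        e = y - x_pred
        S = P_pred + R
        cost += e**2 / S + np.log(S)
        # Update
        K = P_pred / S
        x_hat = x_pred + K * e
        P = (1 - K) * P_pred
    return cost

# Numerical gradient/Hessian methods (here BFGS with finite differences)
res = minimize(prediction_error_cost, x0=[0.5], method="BFGS")
print("estimated a:", res.x[0])  # should land close to a_true = 0.9
```

The same structure scales to the nonlinear case by swapping the predict/update steps for an Extended or Cubature Kalman Filter, as the abstract describes.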
Award ID(s):
1835209 1653589
NSF-PAR ID:
10394353
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 American Control Conference (ACC)
Page Range / eLocation ID:
1353 to 1358
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Uwe Sauer, Dirk (Ed.)
    This paper proposes a model for parameter estimation of a Vanadium Redox Flow Battery (VRFB) based on both the electrochemical model and the equivalent circuit model. The equivalent circuit elements are found by a newly proposed optimization that minimizes the error between the Thevenin and KVL-based impedance of the equivalent circuit. In contrast to most previously proposed circuit models, which are only introduced for constant current charging, the proposed method is applicable to all charging procedures, i.e., constant current, constant voltage, and constant current-constant voltage charging. The proposed model is verified on a nine-cell VRFB stack with a sample constant current-constant voltage charge. In constant current charging mode, the terminal voltage model matches the measured data closely with low deviation; however, it shows discrepancies with the measured data in constant voltage charging. To address these discrepancies in constant voltage mode, two estimation algorithms, a hybrid extended Kalman filter (KF) and a particle filter (PF), are used in this study. The results show the accuracy of the proposed equivalent circuit, with an average deviation of 0.88% for terminal voltage estimation by the extended KF-based method and 0.79% by the particle filter-based method, while the initial equivalent circuit has an error of 7.21%. The proposed procedure is further extended to estimate the state of charge (SoC) of the battery. The results show an average deviation of 4.2% in estimating the battery state of charge using the PF method and 4.4% using the hybrid extended KF method, with the electrochemical SoC estimation method taken as the reference. Both filtering methods are more accurate than Coulomb counting, whose average state-of-charge deviation is 7.4%.
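    As a rough illustration of how a Kalman filter corrects an equivalent-circuit SoC estimate, the sketch below runs one extended-Kalman-filter step with Coulomb counting as the process model and a Thevenin-style terminal-voltage measurement. The OCV curve, circuit values, and noise levels are placeholders, not the paper's identified VRFB parameters.

```python
# One EKF step for SoC estimation against a simple equivalent circuit.
import numpy as np

dt, C_nom = 1.0, 3600.0   # sample time [s] and cell capacity [A*s] (assumed)
R0 = 0.01                 # series resistance [ohm] (placeholder)
Q, R = 1e-7, 1e-3         # process / measurement noise variances (assumed)

def ocv(soc):
    """Placeholder open-circuit-voltage curve; a real VRFB OCV is nonlinear."""
    return 1.2 + 0.3 * soc

def docv_dsoc(soc):
    """Derivative of the OCV curve, used to linearize the measurement model."""
    return 0.3

def ekf_soc_step(soc, P, i_k, v_meas):
    """One EKF cycle: Coulomb-counting predict, terminal-voltage update.
    i_k > 0 denotes charging current [A]."""
    # Predict: Coulomb counting is the process model
    soc_pred = soc + dt * i_k / C_nom
    P_pred = P + Q
    # Measurement model: terminal voltage = OCV(soc) + i*R0 while charging
    v_pred = ocv(soc_pred) + i_k * R0
    H = docv_dsoc(soc_pred)            # measurement Jacobian w.r.t. SoC
    S = H * P_pred * H + R
    K = P_pred * H / S
    soc_new = soc_pred + K * (v_meas - v_pred)
    return soc_new, (1.0 - K * H) * P_pred

# Example: one step at 5 A charging, measured terminal voltage of 1.40 V
soc, P = ekf_soc_step(soc=0.5, P=0.01, i_k=5.0, v_meas=1.40)
```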
  2. Data collected in physical/engineering systems now allows various machine learning methods to conduct system monitoring and control when physical knowledge of the system edge is limited and challenging to recover completely. Solving such problems typically requires identifying forward system mapping rules, from system states to the output measurements. However, forward system identification based on a digital twin can hardly provide complete monitoring functions, such as state estimation, e.g., inferring the states from measurements. While one can directly learn the inverse mapping rule, it is more desirable to re-utilize the forward digital twin, since it is relatively easy to embed physical laws there to regularize the inverse process and avoid overfitting. For this purpose, this paper proposes an invertible learning structure based on designing parallel paths in structural neural networks with basis functionals and embedding virtual storage variables for information preservation. Such two-way digital twin modeling faces an additional challenge: the system inverse can have multiple solutions, which contradicts the reality of one feasible solution for the current system. To avoid an ambiguous inverse, the proposed model maximizes the physical likelihood to contract the original solution space, leading to the unique system operation status of interest. We validate the proposed method on various physical system monitoring tasks and scenarios, such as inverse kinematics problems and power system state estimation. Furthermore, by building a perfect match of a forward-inverse pair, the proposed method obtains accurate and computationally efficient inverse predictions given observations. Finally, the forward physical interpretation and small prediction errors guarantee the explainability of the invertible structure, compared to standard learning methods.
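    One standard way to obtain an exactly invertible forward/inverse pair of the kind described above is an additive coupling layer, sketched below in numpy. The fixed sub-network here is an illustrative stand-in for the paper's learned parallel-path structure, not its actual architecture.

```python
# Minimal additive coupling layer: invertible by construction.
import numpy as np

def split(x):
    """Partition features into two parallel paths."""
    d = x.shape[-1] // 2
    return x[..., :d], x[..., d:]

def net(u, W, b):
    """Placeholder sub-network (would be trained in practice)."""
    return np.tanh(u @ W + b)

def forward(x, W, b):
    x1, x2 = split(x)
    # x2 is shifted by a function of x1, so the map is exactly
    # invertible no matter what 'net' computes.
    y2 = x2 + net(x1, W, b)
    return np.concatenate([x1, y2], axis=-1)

def inverse(y, W, b):
    y1, y2 = split(y)
    x2 = y2 - net(y1, W, b)  # exact inverse: subtract the same shift
    return np.concatenate([y1, x2], axis=-1)

rng = np.random.default_rng(1)
W, b = rng.normal(size=(2, 2)), np.zeros(2)
x = rng.normal(size=(5, 4))
assert np.allclose(inverse(forward(x, W, b), W, b), x)  # round-trip check
```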
    more » « less
  3. Abstract

    This article presents a novel approach that couples a deterministic four‐dimensional variational (4DVAR) assimilation method with a particle filter (PF) ensemble data assimilation system to produce a robust approach for dual state-parameter estimation. In our proposed method, the Hybrid Ensemble and Variational Data Assimilation framework for Environmental systems (HEAVEN), we characterize model structural uncertainty in addition to model parameter and input uncertainties. The sequential PF is formulated within the 4DVAR system to design a computationally efficient feedback mechanism throughout the assimilation period. In this framework, the 4DVAR optimization produces the maximum a posteriori estimate of state variables at the beginning of the assimilation window without the need to develop the adjoint of the forecast model. The 4DVAR solution is then perturbed by a newly defined prior error covariance matrix to generate an initial-condition ensemble for the PF system, providing more accurate and reliable posterior distributions within the same assimilation window. The prior error covariance matrix is updated from one cycle to the next over the main assimilation period to account for model structural uncertainty, resulting in an improved estimate of the posterior distribution. The premise of the presented approach is that it (1) accounts for all sources of uncertainty involved in hydrologic prediction, (2) uses a small ensemble size, and (3) precludes particle degeneracy and sample impoverishment. The proposed method is applied to a nonlinear hydrologic model, and its effectiveness, robustness, and reliability are demonstrated for several river basins across the United States.
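    For reference, a minimal bootstrap particle-filter cycle of the kind embedded sequentially in such a hybrid framework might look as follows. The forecast and observation operators are generic placeholders, and systematic resampling stands in for the paper's degeneracy safeguards.

```python
# One bootstrap particle-filter cycle: propagate, reweight, resample.
import numpy as np

rng = np.random.default_rng(2)

def pf_step(particles, weights, y, f, h, q_std, r_std):
    """particles: (n,) state samples; f: forecast model; h: observation
    operator; y: scalar observation; q_std/r_std: noise std devs."""
    # Propagate each particle through the (stochastic) forecast model
    particles = f(particles) + rng.normal(0, q_std, particles.shape)
    # Reweight by the Gaussian likelihood of the observation y
    innov = y - h(particles)
    weights = weights * np.exp(-0.5 * (innov / r_std) ** 2)
    weights /= weights.sum()
    # Systematic resampling guards against particle degeneracy
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.searchsorted(np.cumsum(weights), positions)
    return particles[idx], np.full(n, 1.0 / n)
```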

     
  4. Abstract

    Many real-world systems modeled using differential equations involve unknown or uncertain parameters. Standard approaches to parameter estimation inverse problems in this setting typically focus on estimating constants; yet some unobservable system parameters may vary with time without known evolution models. In this work, we propose a novel approximation method inspired by the Fourier series to estimate time-varying parameters in deterministic dynamical systems modeled with ordinary differential equations. Using ensemble Kalman filtering in conjunction with Fourier series-based approximation models, we detail two possible implementation schemes for sequentially updating the time-varying parameter estimates given noisy observations of the system states. We demonstrate the capabilities of the proposed approach in estimating periodic parameters, both when the period is known and unknown, as well as non-periodic time-varying parameters of different forms, with several computed examples using a forced harmonic oscillator. The results emphasize the influence of the frequencies and number of approximation model terms on the time-varying parameter estimates and the corresponding dynamical system predictions.
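    A hedged sketch of the core mechanism: the time-varying parameter is written as a truncated Fourier series, and a stochastic ensemble Kalman filter updates the (static) coefficients from noisy observations. The base frequency, mode count, and analysis-step details below are illustrative assumptions, not the paper's schemes.

```python
# Fourier-series parameter model plus a stochastic EnKF analysis step.
import numpy as np

rng = np.random.default_rng(3)
M, omega = 2, 2 * np.pi / 10.0  # number of Fourier modes, base frequency

def basis(t):
    """Design vector [1, cos(w t), sin(w t), ..., cos(Mw t), sin(Mw t)]."""
    terms = [1.0]
    for j in range(1, M + 1):
        terms += [np.cos(j * omega * t), np.sin(j * omega * t)]
    return np.array(terms)

def theta_of_t(coeffs, t):
    """Time-varying parameter reconstructed from its Fourier coefficients."""
    return coeffs @ basis(t)

def enkf_update(ensemble, y, h, r_std):
    """Stochastic EnKF analysis on the augmented [states; coeffs] ensemble.
    ensemble: (n, d) rows of augmented vectors; h maps rows to scalar obs."""
    Y = h(ensemble)                           # predicted observations, (n,)
    X_a = ensemble - ensemble.mean(axis=0)    # state anomalies
    Y_a = Y - Y.mean()                        # observation anomalies
    n = ensemble.shape[0]
    # Kalman gain from sample covariances: C_xy / (C_yy + r^2)
    K = (X_a.T @ Y_a) / ((Y_a @ Y_a) + (n - 1) * r_std**2)
    perturbed = y + rng.normal(0, r_std, n)   # perturbed observations
    return ensemble + np.outer(perturbed - Y, K)
```

    Augmenting the state with the coefficients lets the filter track the parameter indirectly: each analysis step nudges the coefficients, and theta_of_t reconstructs the time-varying parameter between updates.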

     
  5. Obeid, I. ; Selesnik, I. ; Picone, J. (Ed.)
    The Neuronix high-performance computing cluster allows us to conduct extensive machine learning experiments on big data [1]. This heterogeneous cluster uses innovative scheduling technology, Slurm [2], that manages a network of CPUs and graphics processing units (GPUs). The GPU farm consists of a variety of processors, ranging from low-end consumer-grade devices such as the Nvidia GTX 970 to higher-end devices such as the GeForce RTX 2080. These GPUs are essential to our research since they allow extremely compute-intensive deep learning tasks to be executed on massive data resources such as the TUH EEG Corpus [2]. We use TensorFlow [3] as the core machine learning library for our deep learning systems and routinely employ multiple GPUs to accelerate the training process.

    Reproducible results are essential to machine learning research. Reproducibility in this context means the ability to replicate an existing experiment: performance metrics such as error rates should be identical, and floating-point calculations should match closely. Three examples of ways we typically expect an experiment to be replicable are: (1) the same job run on the same processor should produce the same results each time it is run; (2) a job run on a CPU and a GPU should produce identical results; and (3) a job should produce comparable results if the data is presented in a different order.

    System optimization requires an ability to directly compare error rates for algorithms evaluated under comparable operating conditions. However, it is difficult to exactly reproduce the results for large, complex deep learning systems that often require more than a trillion calculations per experiment [5]. This is a fairly well-known issue and one we will explore in this poster. Researchers must be able to replicate results on a specific data set to establish the integrity of an implementation; they can then use that implementation as a baseline for comparison purposes. A lack of reproducibility makes it very difficult to debug algorithms and validate changes to the system. Equally important, since many results in deep learning research depend on the order in which the system is exposed to the data, the specific processors used, and even the order in which those processors are accessed, comparing two algorithms becomes challenging, since each system must be individually optimized for a specific data set or processor. This is extremely time-consuming for algorithm research in which a single run often taxes a computing environment to its limits. Well-known techniques such as cross-validation [5,6] can mitigate these effects, but they are also computationally expensive.

    These issues are further compounded by the fact that most deep learning algorithms are susceptible to the way computational noise propagates through the system. GPUs are particularly notorious for this because, in a clustered environment, it becomes more difficult to control which processors are used at various points in time. Another equally frustrating issue is that upgrades to the deep learning package, such as the transition from TensorFlow v1.9 to v1.13, can also result in large fluctuations in error rates when re-running the same experiment. Since TensorFlow is constantly updating functions to support GPU use, maintaining a historical archive of experimental results that can be used to calibrate algorithm research is quite a challenge. This makes it very difficult to optimize the system or select the best configurations.
    The overall impact of all of the issues described above is significant, as error rates can fluctuate by as much as 25% due to these types of computational issues. Cross-validation is one technique used to mitigate this, but it is expensive since multiple runs over the data are needed, further taxing a computing infrastructure already running at maximum capacity. GPUs are preferred when training a large network since these systems train at least two orders of magnitude faster than CPUs [7]; large-scale experiments are simply not feasible without them. However, there is a tradeoff to gain this performance. Since all our GPUs use the NVIDIA CUDA® Deep Neural Network library (cuDNN) [8], a GPU-accelerated library of primitives for deep neural networks, an element of randomness enters the experiment. When a GPU is used to train a network in TensorFlow, it automatically searches for a cuDNN implementation. NVIDIA's cuDNN implementation provides algorithms that increase performance and help the model train more quickly, but they are non-deterministic [9,10]. Since our networks have many complex layers, there is no easy way to avoid this randomness. Instead of comparing each epoch, we compare the average performance of the experiment, because it gives us a hint of how our model is performing per experiment and whether the changes we make are effective.

    In this poster, we will discuss a variety of issues related to reproducibility and introduce ways we mitigate these effects. For example, TensorFlow uses a random number generator (RNG) which is not seeded by default; TensorFlow determines the initialization point and how certain functions execute using the RNG. The solution is to seed all the necessary components before training the model. This forces TensorFlow to use the same initialization point and sets how certain layers work (e.g., dropout layers). However, seeding all the RNGs will not guarantee a controlled experiment: other variables can affect the outcome, such as training on GPUs, allowing multi-threading on CPUs, using certain layers, etc.

    To mitigate our problems with reproducibility, we first make sure that the data is processed in the same order during training. Therefore, we save the data ordering from the last experiment to ensure that the newer experiment follows the same order; if we allow the data to be shuffled, performance can be affected by how the model was exposed to the data. We also specify the float data type to be 32-bit, since Python defaults to 64-bit. We try to avoid 64-bit precision because the numbers produced by a GPU can vary significantly depending on the GPU architecture [11-13]. Controlling precision in this way somewhat reduces differences due to computational noise, even though it technically increases the amount of computational noise.

    We are currently developing more advanced techniques for preserving the efficiency of our training process while also maintaining the ability to reproduce models. In our poster presentation we will demonstrate these issues using some novel visualization tools, present several examples of the extent to which these issues influence research results on electroencephalography (EEG) and digital pathology experiments, and introduce new ways to manage such computational issues.
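    A minimal sketch of the seeding approach described above, written for TensorFlow 2.x (the 1.x API used in the poster exposes tf.set_random_seed instead); the determinism environment flag is a TF >= 2.1 assumption and does not by itself remove all cuDNN nondeterminism.

```python
# Seed every RNG an experiment touches, before building the model.
import os
import random
import numpy as np
import tensorflow as tf

SEED = 1337
os.environ["TF_DETERMINISTIC_OPS"] = "1"  # request deterministic GPU kernels
random.seed(SEED)                         # Python's built-in RNG
np.random.seed(SEED)                      # NumPy RNG (data shuffling, init)
tf.random.set_seed(SEED)                  # TensorFlow op-level seeds

# Pin 32-bit floats, as discussed above, to reduce cross-GPU variation
tf.keras.backend.set_floatx("float32")
```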