

Title: Structure exploiting methods for fast uncertainty quantification in multiphase flow through heterogeneous media
We present a computational framework for dimension reduction and surrogate modeling to accelerate uncertainty quantification in computationally intensive models with high-dimensional inputs and function-valued outputs. Our driving application is multiphase flow in saturated-unsaturated porous media in the context of radioactive waste storage. For fast input dimension reduction, we utilize an approximate global sensitivity measure for function-valued outputs, motivated by ideas from active subspace methods. The proposed approach does not require expensive gradient computations. We generate an efficient surrogate model by combining a truncated Karhunen-Loève (KL) expansion of the output with polynomial chaos expansions for the output KL modes, constructed in the reduced parameter space. We demonstrate the effectiveness of the proposed surrogate modeling approach with a comprehensive set of numerical experiments, where we consider a number of function-valued (temporally or spatially distributed) quantities of interest (QoIs).
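The surrogate construction described above can be illustrated on a toy problem. The sketch below is a minimal, assumption-laden illustration (a single reduced input, a synthetic time-series output, ad hoc truncation level and polynomial degree), not the paper's implementation: a truncated KL expansion of the output is obtained from the SVD of centered snapshots, and each retained KL coefficient is fit with a Legendre polynomial chaos expansion in the reduced parameter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all names and sizes hypothetical): n_samples model runs of a
# function-valued output on n_t time points, driven by one reduced input z.
n_samples, n_t = 200, 50
t = np.linspace(0.0, 1.0, n_t)
z = rng.uniform(-1.0, 1.0, n_samples)               # reduced parameter
Y = np.outer(np.sin(3 * z) + z**2, t) + 0.01 * rng.standard_normal((n_samples, n_t))

# Truncated KL expansion of the output: SVD of the centered snapshot matrix.
Y_mean = Y.mean(axis=0)
U, s, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
r = 3                                               # retained KL modes
modes = Vt[:r]                                      # discrete KL basis functions
coeffs = (Y - Y_mean) @ modes.T                     # per-sample KL coefficients

# Polynomial chaos surrogate for each KL coefficient: least-squares fit of
# Legendre polynomials in the reduced parameter z (degree chosen ad hoc).
deg = 5
Phi = np.polynomial.legendre.legvander(z, deg)      # PCE design matrix
C, *_ = np.linalg.lstsq(Phi, coeffs, rcond=None)    # (deg+1, r) PCE coefficients

def surrogate(z_new):
    """Predict the full time series for new reduced inputs z_new."""
    a = np.polynomial.legendre.legvander(np.atleast_1d(z_new), deg) @ C
    return Y_mean + a @ modes

# Reconstruction error on the training samples (dominated by noise/truncation).
err = np.abs(surrogate(z) - Y).max()
```

Once fitted, `surrogate` is cheap to evaluate, so Monte Carlo estimates of output statistics can be computed from the surrogate instead of the expensive model.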
Award ID(s):
1953271
NSF-PAR ID:
10342085
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Computational geosciences
Volume:
25
Issue:
6
ISSN:
1420-0597
Page Range / eLocation ID:
2167--2189
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. The complexity and size of state-of-the-art cell models have increased significantly, in part because these models are required to possess complex cellular functions that are thought, but not necessarily proven, to be important. Modern cell models often involve hundreds of parameters; the values of these parameters come, more often than not, from animal experiments whose relationship to human physiology is weak, with very little information on the errors in these measurements. The concomitant uncertainties in parameter values result in uncertainties in the model outputs or quantities of interest (QoIs). Global sensitivity analysis (GSA) aims at apportioning to individual parameters (or sets of parameters) their relative contribution to output uncertainty, thereby introducing a measure of influence or importance of said parameters. New GSA approaches are required to deal with increased model size and complexity; a three-stage methodology consisting of screening (dimension reduction), surrogate modeling, and computing Sobol' indices is presented. The methodology is used to analyze a physiologically validated numerical model of neurovascular coupling which possesses 160 uncertain parameters. The sensitivity analysis investigates three quantities of interest: the average value of K+ in the extracellular space, the average volumetric flow rate through the perfusing vessel, and the minimum value of the actin/myosin complex in the smooth muscle cell. GSA provides a measure of the influence of each parameter, for each of the three QoIs, giving insight into areas of possible physiological dysfunction and areas for further investigation.
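The Sobol' indices mentioned in this abstract can be estimated by Monte Carlo sampling. Below is a hedged sketch using the Ishigami function, a standard GSA benchmark (not the 160-parameter cell model from the abstract), with the Saltelli/Jansen-style pick-freeze estimator of first-order indices:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model for illustration: the Ishigami function, a common GSA benchmark
# with known first-order Sobol' indices (approx. 0.314, 0.442, 0.0).
def model(X):
    x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
    return np.sin(x1) + 7.0 * np.sin(x2) ** 2 + 0.1 * x3**4 * np.sin(x1)

d, n = 3, 200_000
A = rng.uniform(-np.pi, np.pi, (n, d))      # two independent sample blocks
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))      # total output variance

S1 = np.empty(d)
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                     # resample only coordinate i
    fABi = model(ABi)
    # Pick-freeze estimator of the first-order effect of x_i
    S1[i] = np.mean(fB * (fABi - fA)) / var
```

The resulting `S1` ranks the three inputs by their individual contribution to output variance, which is exactly the screening information used in the abstract's three-stage methodology.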
  2. Abstract

    The development of a reliable and robust surrogate model is often constrained by the dimensionality of the problem. For a system with high‐dimensional inputs/outputs (I/O), conventional approaches usually use a low‐dimensional manifold to describe the high‐dimensional system: the I/O data are first reduced to more manageable dimensions, and the condensed representation is then used for surrogate modeling. In this study, a new solution scheme for this type of problem based on a deep learning approach is presented. The proposed surrogate is based on a particular network architecture, namely convolutional neural networks. The surrogate architecture is designed in a hierarchical style containing three different levels of model structures, improving the efficiency and effectiveness of training. To assess the model's performance, uncertainty quantification is carried out on a continuum mechanics benchmark problem. Numerical results suggest the proposed model is capable of directly inferring a wide variety of I/O mapping relationships, and uncertainty analysis via the proposed surrogate successfully characterizes the statistical properties of the output fields when compared to Monte Carlo estimates.
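The core operation that lets such a surrogate map a high-dimensional field directly to another field, without first flattening to a low-dimensional manifold, is the convolution layer. A minimal numpy sketch of a single convolution-plus-activation step follows; the shapes, the averaging kernel, and the ReLU choice are illustrative and not the paper's architecture:

```python
import numpy as np

def conv2d(field, kernel):
    """'Valid' 2-D cross-correlation of an (H, W) field with a (k, k) kernel."""
    h, w = field.shape
    k = kernel.shape[0]
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(field[i:i + k, j:j + k] * kernel)
    return out

rng = np.random.default_rng(2)
field = rng.standard_normal((32, 32))          # e.g. a discretized input field
kernel = np.full((3, 3), 1.0 / 9.0)            # a hypothetical smoothing kernel
out = np.maximum(conv2d(field, kernel), 0.0)   # convolution + ReLU activation
```

Because the kernel is shared across the whole field, the number of trainable weights is independent of the field resolution, which is what keeps such surrogates tractable for high-dimensional I/O.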

     
  3. Abstract

    Modern data collection often entails longitudinal repeated measurements that assume values on a Riemannian manifold. Analyzing such longitudinal Riemannian data is challenging, because of both the sparsity of the observations and the nonlinear manifold constraint. Addressing this challenge, we propose an intrinsic functional principal component analysis for longitudinal Riemannian data. Information is pooled across subjects by estimating the mean curve with local Fréchet regression and smoothing the covariance structure of the linearized data on tangent spaces around the mean. Dimension reduction and imputation of the manifold‐valued trajectories are achieved by utilizing the leading principal components and applying best linear unbiased prediction. We show that the proposed mean and covariance function estimates achieve state‐of‐the‐art convergence rates. For illustration, we study the development of brain connectivity in a longitudinal cohort of Alzheimer's disease and normal participants by modeling the connectivity on the manifold of symmetric positive definite matrices with the affine‐invariant metric. In a second illustration for irregularly recorded longitudinal emotion compositional data for unemployed workers, we show that the proposed method leads to nicely interpretable eigenfunctions and principal component scores. Data used in preparation of this article were obtained from the Alzheimer's Disease Neuroimaging Initiative database.
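The tangent-space linearization this abstract relies on can be made concrete for the SPD manifold with the affine-invariant metric: data are log-mapped to the tangent space at a reference point, where Euclidean (functional PCA) machinery applies, and exp-mapped back. The sketch below implements only the two maps via eigendecompositions; helper names are mine and the smoothing/PCA machinery of the paper is omitted:

```python
import numpy as np

def _sym_fun(S, fun):
    """Apply a scalar function to a symmetric matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * fun(w)) @ V.T

def spd_log(P, S):
    """Log-map of SPD matrix S at base point P (affine-invariant metric)."""
    Ph = _sym_fun(P, np.sqrt)
    Pih = _sym_fun(P, lambda w: 1.0 / np.sqrt(w))
    return Ph @ _sym_fun(Pih @ S @ Pih, np.log) @ Ph

def spd_exp(P, V):
    """Exp-map of tangent vector V at base point P; inverse of spd_log."""
    Ph = _sym_fun(P, np.sqrt)
    Pih = _sym_fun(P, lambda w: 1.0 / np.sqrt(w))
    return Ph @ _sym_fun(Pih @ V @ Pih, np.exp) @ Ph

# Round-trip check on random SPD matrices (shifted Gram matrices).
rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
P = A @ A.T + 4 * np.eye(4)          # reference SPD point (e.g. a Fréchet mean)
S = B @ B.T + 4 * np.eye(4)          # a data point on the manifold
roundtrip_ok = np.allclose(spd_exp(P, spd_log(P, S)), S)
```

In the paper's setting, `P` would be the estimated mean connectivity matrix and the principal component analysis would be carried out on the log-mapped (linearized) data.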

     
  4.
    This paper proposes a new approach for the adaptive functional estimation of second order infinite dimensional systems with structured perturbations. First, the proposed observer is formulated in the natural second order setting, thus ensuring that the time derivative of the estimated position is the estimated velocity; it is therefore called a natural adaptive observer. Assuming that the system does not yield a positive real system when placed in first order form, the next step in deriving parameter adaptive laws is to assume a form of input-output collocation. Finally, to estimate structured perturbations taking the form of functions of the position and/or velocity outputs, the parameter space is not identified with a finite dimensional Euclidean space but is instead considered in a reproducing kernel Hilbert space (RKHS). Such a setting allows one not to be restricted by a priori assumptions on the dimension of the parameter spaces. Convergence of the position and velocity errors in their respective norms is established via a parameter-dependent Lyapunov function, specifically formulated for second order infinite dimensional systems, that includes appropriately defined norms of the functional errors in the RKHS. Boundedness of the functional estimates follows immediately, and via an appropriate definition of a persistence of excitation condition for functional estimation, functional convergence follows. When the system is governed by vector second order dynamics, all abstract spaces for the state evolution collapse to a Euclidean space and the natural adaptive observer results simplify. Numerical results for a second order PDE and a multi-degree of freedom finite dimensional mechanical system are presented.
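The defining property of a "natural" second-order observer, that the velocity estimate is by construction the derivative of the position estimate, can be shown on a scalar mass-spring-damper. This is a bare-bones sketch with illustrative gains and forward-Euler integration; the paper's infinite-dimensional, adaptive, and RKHS aspects are all omitted:

```python
# Scalar plant: q'' + 2*zeta*omega*q' + omega**2 * q = 0, measured output y = q.
# Observer: a copy of the plant kept in second-order form, with position-error
# output injection; the velocity estimate qhd is literally the derivative of qh.
omega, zeta, l1 = 2.0, 0.3, 10.0          # plant parameters and injection gain
dt, steps = 1e-3, 20_000                  # 20 s of forward-Euler integration

q, qd = 1.0, 0.0                          # true position / velocity
qh, qhd = 0.0, 0.0                        # observer starts at rest

for _ in range(steps):
    qdd = -2 * zeta * omega * qd - omega**2 * q
    qhdd = -2 * zeta * omega * qhd - omega**2 * qh + l1 * (q - qh)
    q, qd = q + dt * qd, qd + dt * qdd
    qh, qhd = qh + dt * qhd, qhd + dt * qhdd

pos_err = abs(q - qh)                     # both errors decay: the error system
vel_err = abs(qd - qhd)                   # is e'' + 2*zeta*omega*e' + (omega**2 + l1)*e = 0
```

Because the injection keeps the error dynamics in second-order form, no first-order state augmentation is needed, which is the finite-dimensional shadow of the abstract's construction.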
  5. Yang, Junyuan (Ed.)
    In this work, we develop a new set of Bayesian models to perform registration of real-valued functions. A Gaussian process prior is assigned to the parameter space of time warping functions, and a Markov chain Monte Carlo (MCMC) algorithm is utilized to explore the posterior distribution. While the proposed model can be defined on the infinite-dimensional function space in theory, dimension reduction is needed in practice because one cannot store an infinite-dimensional function on the computer. Existing Bayesian models often rely on some pre-specified, fixed truncation rule to achieve dimension reduction, either by fixing the grid size or the number of basis functions used to represent a functional object. In comparison, the new models in this paper randomize the truncation rule. Benefits of the new models include the ability to make inference on the smoothness of the functional parameters, a data-informative feature of the truncation rule, and the flexibility to control the amount of shape-alteration in the registration process. For instance, using both simulated and real data, we show that when the observed functions exhibit more local features, the posterior distribution on the warping functions automatically concentrates on a larger number of basis functions. Supporting materials including code and data to perform registration and reproduce some of the results presented herein are available online. 
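One ingredient of this abstract, representing a time-warping function by a basis expansion whose truncation level is itself random rather than fixed a priori, can be sketched directly. The construction below (cosine basis, geometric prior on the truncation level, square-then-normalize to enforce monotonicity) is my own illustrative choice, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(5)

def random_warping(t, max_basis=20):
    """Draw a random monotone warping gamma: [0,1] -> [0,1] with a random
    (rather than pre-specified) number of basis functions."""
    K = min(1 + rng.geometric(0.3), max_basis)     # random truncation level
    coef = rng.standard_normal(K) / np.arange(1, K + 1)
    # Represent the (square root of the) warping slope in a cosine basis.
    srs = np.ones_like(t)
    for k in range(K):
        srs = srs + coef[k] * np.cos(np.pi * (k + 1) * t)
    slope = srs**2                                 # squaring keeps the slope >= 0
    # Trapezoidal cumulative integral of the slope, then normalize so gamma(1)=1.
    gamma = np.concatenate([[0.0],
                            np.cumsum(0.5 * (slope[1:] + slope[:-1]) * np.diff(t))])
    return gamma / gamma[-1]

t = np.linspace(0.0, 1.0, 201)
gamma = random_warping(t)                          # a valid warping function
```

Placing a prior on `K` and sampling it within MCMC is what makes the truncation rule data-informative: observed functions with more local features push the posterior toward larger `K`.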