Title: Derivative Based Global Sensitivity Analysis using Conjugate Unscented Transforms
In this paper, a novel way to compute derivative-based global sensitivity measures is presented. The Conjugate Unscented Transform (CUT) is used to evaluate the multidimensional definite integrals that yield the sensitivity measures. The method is compared with Monte Carlo estimates as well as the screening method of Morris. It is shown that CUT provides a much more accurate estimate of the sensitivity measures than Monte Carlo (at far lower computational cost) and than the Morris method (at similar computational cost). Illustrations on three test functions are presented as evidence.
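The derivative-based measures in question can be sketched directly. A minimal illustration, assuming a toy model of our own choosing (not from the paper), is the DGSM nu_i = E[(df/dx_i)^2], estimated here by plain Monte Carlo; the paper's contribution is to replace this sampling with far fewer CUT quadrature points.

```python
import numpy as np

# Minimal sketch of a derivative-based global sensitivity measure (DGSM):
#   nu_i = E[(df/dx_i)^2],
# estimated by Monte Carlo on a toy model f(x1, x2) = x1**2 + sin(x2)
# with uniform inputs on [0, 1]^2. The toy model and the plain-MC
# estimator are illustrative assumptions, not the paper's method.

def grad_f(x):
    # analytic gradient of f(x1, x2) = x1**2 + sin(x2)
    return np.stack([2.0 * x[:, 0], np.cos(x[:, 1])], axis=1)

def dgsm_monte_carlo(n_samples, rng):
    x = rng.uniform(0.0, 1.0, size=(n_samples, 2))  # uniform inputs
    return (grad_f(x) ** 2).mean(axis=0)            # one nu_i per input

rng = np.random.default_rng(0)
nu = dgsm_monte_carlo(200_000, rng)
# exact integrals for this toy: nu_1 = 4/3, nu_2 = 1/2 + sin(2)/4
```

Replacing the random samples with a sparse set of deterministic CUT sigma points evaluates the same expectations with far fewer gradient evaluations, which is the efficiency claim above.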
Award ID(s): 1537210
NSF-PAR ID: 10113120
Journal Name: 2019 American Control Conference (ACC)
Sponsoring Org: National Science Foundation
More Like this
  1.
    Global sensitivity analysis aims at quantifying and ranking the relative contributions of all the uncertain inputs of a mathematical model to the uncertainty in the model's output, for any input-output mapping. Motivated by the limitations of the well-established, variance-based Sobol' indices, there has been interest in developing non-moment-based global sensitivity metrics. This paper presents two complementary classes of metrics (one of which generalizes an existing metric in the literature) based on statistical distances between probability distributions rather than statistical moments. To alleviate the large computational cost of Monte Carlo sampling of the input-output model to estimate probability distributions, the use of polynomial chaos surrogate models is proposed. The surrogate models, in conjunction with sparse quadrature rules such as conjugate unscented transforms, permit efficient calculation of the proposed global sensitivity measures. Three benchmark sensitivity analysis examples illustrate the proposed approach.
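The distance-based idea can be sketched concretely. The following is an illustrative estimator of our own construction (not the paper's exact metric): rank an input by the average Kolmogorov distance between the output distribution conditioned on that input and the unconditional output distribution.

```python
import numpy as np

# Hedged sketch of a statistical-distance sensitivity index (not the
# paper's estimator): bin input x_i into quantile bins, and average the
# Kolmogorov distance between each conditional output sample and the
# full output sample. An influential input yields large distances.

def model(x):
    return x[:, 0] + 0.1 * x[:, 1]   # toy model: x1 dominates the output

def kolmogorov(a, b):
    # max gap between two empirical CDFs, evaluated on the pooled grid
    grid = np.sort(np.concatenate([a, b]))
    Fa = np.searchsorted(np.sort(a), grid, side="right") / a.size
    Fb = np.searchsorted(np.sort(b), grid, side="right") / b.size
    return np.max(np.abs(Fa - Fb))

def distance_index(y, xi, n_bins=10):
    edges = np.quantile(xi, np.linspace(0.0, 1.0, n_bins + 1))
    bins = np.clip(np.searchsorted(edges, xi, side="right") - 1,
                   0, n_bins - 1)
    return np.mean([kolmogorov(y[bins == b], y) for b in range(n_bins)])

rng = np.random.default_rng(1)
x = rng.uniform(size=(50_000, 2))
y = model(x)
s1 = distance_index(y, x[:, 0])  # should rank far above s2
s2 = distance_index(y, x[:, 1])
```

In the paper, the expensive `model` evaluations would be replaced by a polynomial chaos surrogate sampled cheaply, with CUT-style quadrature used to build the surrogate.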
  2. We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model. This renders most Markov chain Monte Carlo approaches infeasible, since they typically require O(10^4) model runs, or more. Moreover, the forward model is often given as a black box or is impractical to differentiate. Therefore derivative-free algorithms are highly desirable. We propose a framework, built on Kalman methodology, to efficiently perform Bayesian inference in such inverse problems. The basic method is based on an approximation of the filtering distribution of a novel mean-field dynamical system, into which the inverse problem is embedded as an observation operator. Theoretical properties are established for linear inverse problems, demonstrating that the desired Bayesian posterior is given by the steady state of the law of the filtering distribution of the mean-field dynamical system, and proving exponential convergence to it. This suggests that, for nonlinear problems which are close to Gaussian, sequentially computing this law provides the basis for efficient iterative methods to approximate the Bayesian posterior. Ensemble methods are applied to obtain interacting particle system approximations of the filtering distribution of the mean-field model, and practical strategies to further reduce the computational and memory cost of the methodology are presented, including low-rank approximation and a bi-fidelity approach. The effectiveness of the framework is demonstrated in several numerical experiments, including proof-of-concept linear/nonlinear examples and two large-scale applications: learning of permeability parameters in subsurface flow, and learning subgrid-scale parameters in a global climate model. Moreover, the stochastic ensemble Kalman filter and various ensemble square-root Kalman filters are all employed and compared numerically. The results demonstrate that the proposed method, based on exponential convergence to the filtering distribution of a mean-field dynamical system, is competitive with pre-existing Kalman-based methods for inverse problems.
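The Kalman-based, derivative-free flavor of such methods can be sketched with a generic ensemble Kalman inversion (EKI) loop. This is a minimal sketch of the general technique, assuming a small linear forward map of our own choosing, not the authors' mean-field algorithm.

```python
import numpy as np

# Hedged sketch of ensemble Kalman inversion (EKI) for a linear inverse
# problem y = G u + noise. Each iteration nudges every ensemble member
# toward the data using ensemble cross-covariances, with no derivatives
# of the forward map required. The problem setup is an illustrative
# assumption, not from the paper.

rng = np.random.default_rng(2)
G = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.3]])  # forward operator
u_true = np.array([1.0, -0.5])
noise_std = 0.01
y = G @ u_true + noise_std * rng.normal(size=3)

J = 200                                    # ensemble size
u = rng.normal(size=(J, 2))                # prior ensemble of parameters
Gamma = noise_std ** 2 * np.eye(3)         # observation noise covariance

for _ in range(30):
    g = u @ G.T                            # forward model on each member
    du, dg = u - u.mean(0), g - g.mean(0)
    C_ug = du.T @ dg / (J - 1)             # parameter-output covariance
    C_gg = dg.T @ dg / (J - 1)             # output covariance
    K = C_ug @ np.linalg.inv(C_gg + Gamma)           # Kalman gain
    y_pert = y + noise_std * rng.normal(size=(J, 3))  # perturbed obs
    u = u + (y_pert - g) @ K.T             # derivative-free update

u_mean = u.mean(0)                         # should be close to u_true
```

The forward map is only ever evaluated, never differentiated, which is why such methods suit black-box models; the mean-field construction above refines this basic interacting-particle picture.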
  3. State-of-the-art many-body wave function techniques rely on heuristics to achieve high accuracy at an attainable computational cost to solve the many-body Schrödinger equation. By far, the most common property used to assess accuracy has been the total energy; however, total energies do not give a complete picture of electron correlation. In this work, we assess the von Neumann entropy of the one-particle reduced density matrix (1-RDM) to compare selected configuration interaction (CI), coupled cluster, variational Monte Carlo, and fixed-node diffusion Monte Carlo for benchmark hydrogen chains. A new algorithm, the circle reject method, is presented, which improves the efficiency of evaluating the von Neumann entropy using quantum Monte Carlo by several orders of magnitude. The von Neumann entropy of the 1-RDM and the eigenvalues of the 1-RDM are shown to distinguish between the dynamic correlation introduced by the Jastrow and the static correlation introduced by determinants with large weights, confirming some of the lore in the field concerning the difference between the selected CI and Slater–Jastrow wave functions.
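The central diagnostic here is standard and easy to state: the von Neumann entropy of the 1-RDM, S = -sum_k lam_k log(lam_k) over its eigenvalues (natural-orbital occupations). A minimal sketch, using hand-built diagonal density matrices as illustrative inputs:

```python
import numpy as np

# Von Neumann entropy of a one-particle reduced density matrix (1-RDM):
#   S = -sum_k lam_k * log(lam_k),
# where lam_k are the 1-RDM eigenvalues, normalized here to sum to 1.
# A single Slater determinant has occupations in {0, 1} and zero entropy;
# fractional occupations signal electron correlation. The example
# matrices are illustrative, not from the paper's hydrogen-chain data.

def von_neumann_entropy(rdm):
    lam = np.linalg.eigvalsh(rdm)
    lam = lam[lam > 1e-12] / lam.sum()   # drop numerical zeros, normalize
    return -np.sum(lam * np.log(lam))

# uncorrelated: one natural orbital fully occupied -> S = 0
rdm_hf = np.diag([1.0, 0.0, 0.0])
# correlated: occupation split across two natural orbitals -> S = ln 2
rdm_corr = np.diag([0.5, 0.5, 0.0])
```

The circle reject method in the abstract addresses how to estimate this quantity efficiently from stochastic quantum Monte Carlo samples of the 1-RDM, where the eigenvalue decomposition above is the easy part.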

  4.

    We present a method for calculating first-order response properties in phaseless auxiliary field quantum Monte Carlo by applying automatic differentiation (AD). Biases and statistical efficiency of the resulting estimators are discussed. Our approach demonstrates that AD enables the calculation of reduced density matrices with the same computational cost scaling per sample as energy calculations, accompanied by a cost prefactor of less than four in our numerical calculations. We investigate the role of self-consistency and trial orbital choice in property calculations. We find that orbitals obtained using density functional theory perform well for the dipole moments of selected molecules compared to those optimized self-consistently.
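The idea of obtaining a first-order response property by differentiating the energy can be shown in miniature with forward-mode automatic differentiation. This is a toy sketch, not AFQMC: the quadratic energy model and its parameters are assumptions for illustration.

```python
# Hedged toy sketch of AD for a first-order response property: a dipole
# component is mu = -dE/dF at zero field F. Forward-mode AD with dual
# numbers carries the derivative alongside the value, so the property
# costs a constant factor more than the energy itself, echoing the
# cost-scaling point in the abstract. The energy model is invented.

class Dual:
    def __init__(self, val, eps=0.0):
        self.val, self.eps = val, eps     # value and derivative parts
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.eps + o.eps)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val,
                    self.val * o.eps + self.eps * o.val)  # product rule
    __rmul__ = __mul__

def energy(field):
    # toy model E(F) = E0 - mu0*F - 0.5*alpha*F**2 (assumed parameters)
    E0, mu0, alpha = -1.5, 0.8, 2.0
    return E0 + (-mu0) * field + (-0.5 * alpha) * field * field

dE = energy(Dual(0.0, 1.0))   # seed dF/dF = 1 at F = 0
mu = -dE.eps                  # recovers mu0 = 0.8 exactly
```

Reverse-mode AD through a full AFQMC estimator follows the same principle at scale, which is what makes reduced density matrices available at roughly the cost of the energy.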

  5.
    Deep neural networks (DNNs) have achieved state-of-the-art performance in many important domains, including medical diagnosis, security, and autonomous driving. In domains where safety is highly critical, an erroneous decision can result in serious consequences. While perfect prediction accuracy is not always achievable, recent work on Bayesian deep networks shows that it is possible to know when DNNs are more likely to make mistakes. Knowing what DNNs do not know is desirable for increasing the safety of deep learning technology in sensitive applications, and Bayesian neural networks attempt to address this challenge. Traditional approaches are computationally intractable and do not scale well to large, complex neural network architectures. In this paper, we develop a theoretical framework to approximate Bayesian inference for DNNs by imposing a Bernoulli distribution on the model weights. This method, called Monte Carlo DropConnect (MC-DropConnect), gives us a tool to represent model uncertainty with little change to the overall model structure or computational cost. We extensively validate the proposed algorithm on multiple network architectures and datasets for classification and semantic segmentation tasks. We also propose new metrics to quantify uncertainty estimates, enabling an objective comparison between MC-DropConnect and prior approaches. Our empirical results demonstrate that the proposed framework yields significant improvement in both prediction accuracy and uncertainty estimation quality compared to the state of the art.
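The mechanism behind MC-DropConnect is simple to sketch: sample a Bernoulli mask over the weights at test time, repeat the forward pass, and read uncertainty off the spread of the outputs. The two-layer numpy network below is a minimal illustration of that mechanism, not the paper's framework; all sizes and the keep probability are assumptions.

```python
import numpy as np

# Hedged sketch of the MC-DropConnect idea: at inference, each of T
# stochastic forward passes samples a Bernoulli mask over the WEIGHTS
# (DropConnect, as opposed to dropout's mask over activations). The mean
# of the T outputs is the prediction; their standard deviation is a
# cheap model-uncertainty estimate. Toy two-layer ReLU network.

def mc_dropconnect_predict(x, W1, W2, p_keep=0.8, T=100, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    outs = []
    for _ in range(T):
        m1 = rng.random(W1.shape) < p_keep          # Bernoulli weight mask
        m2 = rng.random(W2.shape) < p_keep
        h = np.maximum(0.0, x @ (W1 * m1) / p_keep)  # ReLU, rescaled
        outs.append(h @ (W2 * m2) / p_keep)
    outs = np.stack(outs)
    return outs.mean(0), outs.std(0)  # prediction, uncertainty estimate

rng = np.random.default_rng(3)
W1 = rng.normal(size=(4, 16)) / 2.0   # assumed toy weights
W2 = rng.normal(size=(16, 1)) / 4.0
x = rng.normal(size=(1, 4))
mean, std = mc_dropconnect_predict(x, W1, W2)
```

Because only the inference loop changes, the architecture and training pipeline stay intact, which is the "little change in the overall model structure" claimed above.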