Search for: All records

Award ID contains: 1663667


  1. In computational mechanics, multiple models are often available to describe a physical system. While Bayesian model selection is a helpful tool to compare these models using measurement data, it requires the computationally expensive estimation of a multidimensional integral — known as the marginal likelihood or the model evidence (i.e., the probability of observing the measured data given the model) — over the multidimensional parameter domain. This study presents efficient approaches for estimating this marginal likelihood by transforming it into a one-dimensional integral that is subsequently evaluated using a quadrature rule at multiple adaptively chosen iso-likelihood contour levels. Three different algorithms are proposed to estimate the probability mass at each adapted likelihood level using samples from importance sampling, stratified sampling, and Markov chain Monte Carlo (MCMC) sampling, respectively. The proposed approach is illustrated — with comparisons to Monte Carlo, nested, and MultiNest sampling — through four numerical examples. The first, an elementary example, shows the accuracies of the three proposed algorithms when the exact value of the marginal likelihood is known. The second example considers an 11-story building with an uncertain hysteretic base isolation layer, subjected to earthquake excitation, with two models describing the isolation layer's behavior. The third example considers flow past a cylinder when the inlet velocity is uncertain. Based on these examples, the method with stratified sampling is by far the most accurate and efficient for complex model behavior in low dimensions, particularly considering that it can be implemented to exploit parallel computation. In the fourth example, the proposed approach is applied to heat conduction in an inhomogeneous plate with uncertain thermal conductivity modeled through a 100 degree-of-freedom Karhunen–Loève expansion. The results indicate that MultiNest cannot efficiently handle the high-dimensional parameter space, whereas the proposed MCMC-based method explores it more accurately and efficiently. The marginal likelihood results for the last three examples — when compared with the results obtained from standard Monte Carlo sampling, nested sampling, and the MultiNest algorithm — show good agreement. 
    Free, publicly-accessible full text available October 1, 2026
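    One way to realize the one-dimensional reduction described above is the layer-cake identity Z = ∫ L(θ) p(θ) dθ = ∫₀^∞ P_prior[L(θ) > λ] dλ, evaluated by quadrature over likelihood levels. The sketch below uses plain Monte Carlo for the probability mass (standing in for the paper's importance, stratified, and MCMC estimators) on an assumed conjugate Gaussian toy problem whose exact evidence is known; all names and settings are illustrative, not the paper's.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy 2-D conjugate problem with a known exact evidence:
    # prior theta ~ N(0, I), data y | theta ~ N(theta, sigma^2 I).
    d, sigma = 2, 0.5
    y = np.array([0.3, -0.2])

    def likelihood(theta):
        r2 = np.sum((y - theta) ** 2, axis=-1)
        return np.exp(-0.5 * r2 / sigma**2) / (2 * np.pi * sigma**2)

    # Layer-cake identity: Z = integral_0^inf P_prior[L(theta) > lam] d(lam),
    # so the multidimensional evidence integral becomes one-dimensional.
    theta = rng.standard_normal((200_000, d))           # prior samples
    L = likelihood(theta)

    # Adaptive iso-likelihood levels from the empirical quantiles of L,
    # padded with 0 and max(L) so the quadrature spans the full range.
    levels = np.concatenate(([0.0],
                             np.quantile(L, np.linspace(0.01, 0.999, 150)),
                             [L.max()]))
    mass = (L[:, None] > levels[None, :]).mean(axis=0)  # P[L > lam] per level
    Z_hat = np.trapz(mass, levels)                      # 1-D quadrature

    Z_exact = np.exp(-0.5 * np.sum(y**2) / (1 + sigma**2)) / (2 * np.pi * (1 + sigma**2))
    print(f"estimated evidence {Z_hat:.5f} vs exact {Z_exact:.5f}")
    ```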
  2. The objective of this work is to provide a Bayesian reinterpretation of model falsification. We show that model falsification can be viewed as an approximate Bayesian computation (ABC) approach when hypotheses (models) are sampled from a prior. To achieve this, we recast model falsifiers as discrepancy metrics and density kernels so that they may be adopted within ABC and generalized ABC (GABC) methods. We call the resulting frameworks model falsified ABC and GABC, respectively. Moreover, as a result of our reinterpretation, the set of unfalsified models can be shown to comprise realizations of an approximate posterior. We consider both error- and likelihood-domain model falsification in our exposition. Model falsified (G)ABC is used to tackle two practical inverse problems, albeit with synthetic measurements. The first type of problem concerns parameter estimation and includes applications of ABC to the inference of a statistical model whose likelihood can be difficult to compute, and the identification of a cubic-quintic dynamical system. The second type of example involves model selection for the base isolation system of a four degree-of-freedom base-isolated structure. The performance of model falsified ABC and GABC is compared with that of Bayesian inference. The results show that model falsified (G)ABC can be used to solve inverse problems in a computationally efficient manner. The results are also used to compare the various falsifiers in their capability of approximating the posterior and some of its important statistics. Further, we show that model-falsifier-based density kernels can be used in kernel regression to infer unknown model parameters and compute structural responses under epistemic uncertainty. 
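    The ABC reading above admits a very small rejection-sampling sketch: candidate models are drawn from a prior, each is falsified when its discrepancy to the measurements exceeds a tolerance, and the surviving (unfalsified) set is treated as a sample from an approximate posterior. The linear toy model, tolerance, and names below are illustrative assumptions, not the paper's examples.
    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic measurements from a "true" model: y = a * x + noise.
    x = np.linspace(0.0, 1.0, 20)
    a_true = 2.0
    y_obs = a_true * x + 0.05 * rng.standard_normal(x.size)

    # Rejection-ABC view of model falsification: sample candidate parameter
    # values from the prior, falsify those whose discrepancy exceeds the
    # threshold eps, and keep the unfalsified set as an approximate posterior.
    a_prior = rng.uniform(0.0, 4.0, size=50_000)
    discrepancy = np.sqrt(np.mean((a_prior[:, None] * x[None, :] - y_obs) ** 2, axis=1))
    eps = 0.1                            # falsifier threshold / ABC tolerance
    unfalsified = a_prior[discrepancy < eps]

    print(f"{unfalsified.size} unfalsified models, "
          f"posterior mean ~ {unfalsified.mean():.3f} (true value {a_true})")
    ```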
  3. We introduce a novel framework called REIN: Reliability Estimation by learning an Importance sampling (IS) distribution with Normalizing flows (NFs). The NFs learn probability-space maps that transform the probability distribution of the input random variables into a quasi-optimal IS distribution. NFs stack invertible neural networks to construct differentiable bijections with efficiently computed Jacobian determinants. The NF 'pushes forward' a realization from the input probability distribution into a realization from the IS distribution, with importance weights calculated using the change-of-variables formula. We also propose a loss function to learn an NF map that minimizes the reverse Kullback-Leibler divergence between the 'pushforward' distribution and a sequentially updated target distribution obtained by modifying the optimal IS distribution. We demonstrate REIN's efficacy on a set of benchmark problems that feature very low failure rates, multiple failure modes, and high dimensionality, comparing against other variance reduction methods. We also consider two simple applications, the reliability analyses of a thirty-four-story building and a cantilever tube, to demonstrate the applicability of REIN to practical problems of interest. Compared to other methods, REIN is shown to be useful for high-dimensional reliability estimation problems with very small failure probabilities. 
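    A minimal sketch of the reverse-KL training loop described above, with a single diagonal affine bijection standing in for REIN's deep normalizing flows and a smoothed failure indicator forming the quasi-optimal IS target. The linear limit state, temperature, and optimizer settings are illustrative assumptions (PyTorch).
    ```python
    import math
    import torch

    torch.manual_seed(0)

    # Toy reliability problem: X ~ N(0, I_2), failure when g(X) <= 0,
    # with the linear limit state g(x) = 5 - x1 - x2 (exact Pf is known).
    d = 2
    LOG2PI = math.log(2 * math.pi)

    def g(x):
        return 5.0 - x[:, 0] - x[:, 1]

    def log_p(x):
        # log density of the standard-normal input model
        return -0.5 * (x ** 2).sum(1) - 0.5 * d * LOG2PI

    # A deliberately minimal "flow": one invertible affine map
    # x = mu + exp(log_s) * z; REIN stacks deep invertible networks instead.
    mu = torch.zeros(d, requires_grad=True)
    log_s = torch.zeros(d, requires_grad=True)
    opt = torch.optim.Adam([mu, log_s], lr=0.05)
    tau = 0.5   # temperature smoothing the failure indicator in the target

    for _ in range(500):
        z = torch.randn(2048, d)
        x = mu + torch.exp(log_s) * z
        # log q(x) via the change-of-variables formula for the affine map
        log_q = -0.5 * (z ** 2).sum(1) - 0.5 * d * LOG2PI - log_s.sum()
        # smoothed optimal-IS target: log p(x) + log sigmoid(-g(x)/tau)
        log_target = log_p(x) + torch.nn.functional.logsigmoid(-g(x) / tau)
        loss = (log_q - log_target).mean()   # reverse KL up to a constant
        opt.zero_grad(); loss.backward(); opt.step()

    # Importance-sampling estimate of Pf with the learned map as IS density.
    with torch.no_grad():
        z = torch.randn(200_000, d)
        x = mu + torch.exp(log_s) * z
        log_q = -0.5 * (z ** 2).sum(1) - 0.5 * d * LOG2PI - log_s.sum()
        w = torch.exp(log_p(x) - log_q) * (g(x) <= 0)
        pf_exact = 0.5 * math.erfc(5.0 / 2.0)   # P(x1 + x2 > 5), x1 + x2 ~ N(0, 2)
        print(f"Pf ~ {w.mean().item():.3e} (exact {pf_exact:.3e})")
    ```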
  4. We propose a novel modular inference approach combining two different generative models — generative adversarial networks (GANs) and normalizing flows — to approximate the posterior distribution of physics-based Bayesian inverse problems framed in high-dimensional ambient spaces. We dub the proposed framework GAN-Flow. The proposed method leverages the intrinsic dimension-reduction and superior sample-generation capabilities of GANs to define a low-dimensional, data-driven prior distribution. Once a trained GAN prior is available, the inverse problem is solved entirely in the latent space of the GAN using variational Bayesian inference with a normalizing-flow-based variational distribution, which approximates the low-dimensional posterior distribution by transforming realizations from the low-dimensional latent prior (Gaussian) into corresponding realizations of a low-dimensional variational posterior distribution. The trained GAN generator then maps realizations from this approximate posterior distribution in the latent space back to the high-dimensional ambient space. We also propose a two-stage training strategy for GAN-Flow wherein we train the two generative models sequentially. Thereafter, GAN-Flow can estimate the statistics of posterior-predictive quantities of interest at virtually no additional computational cost. The synergy between the two types of generative models allows us to overcome many challenges associated with the application of Bayesian inference to large-scale inverse problems, chief among which are describing an informative prior and sampling from the high-dimensional posterior. GAN-Flow does not involve Markov chain Monte Carlo simulation, making it particularly suitable for solving large-scale inverse problems. We demonstrate the efficacy and flexibility of GAN-Flow on various physics-based inverse problems of varying ambient dimensionality and prior knowledge, using different types of GANs and normalizing flows. Notably, one of the applications we consider involves a 65,536-dimensional phase-retrieval inverse problem wherein an object is reconstructed from sparse, noisy measurements of the magnitude of its Fourier transform. 
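    A heavily simplified sketch of GAN-Flow's structure, not its implementation: the 'pretrained' generator below is an untrained stand-in, and a diagonal affine map replaces the deep normalizing flow. The point is only the two-stage shape of the method: freeze G, run variational inference over the latent variable, then push latent posterior samples through G to the ambient space. The dimensions, network sizes, and linear forward model are all illustrative assumptions.
    ```python
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    k, n, m = 8, 64, 32   # latent, ambient, and measurement dimensions

    # Stand-in for a *pretrained* GAN generator mapping latent z to ambient x.
    # In GAN-Flow this network comes from a first training stage; it is
    # untrained here and frozen, purely to show the structure.
    G = nn.Sequential(nn.Linear(k, 128), nn.ReLU(), nn.Linear(128, n))
    for p in G.parameters():
        p.requires_grad_(False)

    # Synthetic linear inverse problem y = A x + noise in the ambient space.
    A = torch.randn(m, n) / n ** 0.5
    x_true = G(torch.randn(1, k))
    sigma = 0.01
    y = x_true @ A.T + sigma * torch.randn(1, m)

    # Second stage: variational inference in the GAN's latent space, with a
    # minimal affine map z = mu + exp(log_s) * eps replacing the deep NF.
    mu = torch.zeros(k, requires_grad=True)
    log_s = torch.zeros(k, requires_grad=True)
    opt = torch.optim.Adam([mu, log_s], lr=0.02)

    for _ in range(400):
        eps = torch.randn(256, k)
        z = mu + torch.exp(log_s) * eps
        log_lik = -0.5 * ((y - G(z) @ A.T) ** 2).sum(1) / sigma ** 2
        log_prior = -0.5 * (z ** 2).sum(1)      # latent Gaussian prior
        entropy = log_s.sum()                   # up to an additive constant
        loss = -(log_lik + log_prior).mean() - entropy   # negative ELBO
        opt.zero_grad(); loss.backward(); opt.step()

    # Posterior-predictive ambient samples: push the latent posterior through G.
    with torch.no_grad():
        x_post = G(mu + torch.exp(log_s) * torch.randn(1000, k))
        err = (x_post.mean(0) - x_true.squeeze()).norm() / x_true.norm()
        print(f"relative error of posterior-mean reconstruction: {err.item():.3f}")
    ```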
  5. Large-scale seismic structural tests are crucial to validating both structural design methodologies and the effectiveness of seismic isolation devices. Given the significant costs of such tests, however, it is essential to leverage data from completed tests through numerical models of the tested structures, updated using the data collected from the experiments, to conduct additional studies that would be difficult, unsafe, or impossible to test physically. Updating complex numerical models, though, poses its own challenges. The first contribution of this paper is a multi-stage model updating method suitable for high-order models of base-isolated structures, which is motivated and evaluated through modeling and model updating of a full-scale four-story base-isolated reinforced-concrete frame building that was tested in 2013 at the NIED E-Defense laboratory in Japan. In most studies involving model updating, all to-be-updated parameters are updated simultaneously; however, given the observation that the superstructure in this study predominantly moves as a rigid body in low-frequency modes and the isolation layer plays a minor role in all other modes, this study proposes updating parameters in stages: first, the linear superstructure parameters are updated so that its natural frequencies and mode shapes match those identified via subspace system identification of the experimental building responses to low-level random excitations; then, the isolation-layer device linear parameters are updated so that the natural frequencies, damping ratios, and mode shapes of the three isolation modes match. These two stages break a large-scale linear model updating problem into two smaller problems, thereby reducing the search space for the to-be-updated parameters, which generally reduces computational costs regardless of the optimization algorithm adopted. Due to the limited instrumentation, the identified modes constitute only a subset of all the modes; to match each identified mode with an FEM mode, a procedure is proposed to compare each identified mode with a candidate set of FEM modes and to select the best match, which is the paper's second contribution. Further, nonlinear isolation-layer device models are proposed, updated, and validated with experimental data. Finally, combining the isolation-layer devices' nonlinear models with the updated linear superstructure FEM yields a data-calibrated nonlinear numerical model that will be used for further studies of controllable damping and validation of new design methodologies; it is being made available to the research community, alleviating the dearth of experimentally calibrated numerical models of full-scale base-isolated buildings with lateral-torsional coupling effects. 
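    A minimal sketch of the mode-matching step described above: each identified mode shape, observed only at instrumented DOFs, is compared against a candidate set of FEM modes restricted to those DOFs, and the best match is kept. The Modal Assurance Criterion (MAC) is assumed as the comparison measure here, and the synthetic mode shapes are illustrative.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-ins: FEM mode shapes at all DOFs; "identified" modes
    # observed only at instrumented DOFs (noisy copies of FEM modes 1, 4, 7).
    n_dof, n_fem = 12, 10
    fem_modes = np.linalg.qr(rng.standard_normal((n_dof, n_fem)))[0]
    sensors = [0, 3, 6, 9]                       # instrumented DOFs only
    true_ids = [1, 4, 7]
    id_modes = (fem_modes[sensors][:, true_ids]
                + 0.05 * rng.standard_normal((len(sensors), len(true_ids))))

    def mac(phi_a, phi_b):
        """Modal Assurance Criterion: 1 means identical up to scaling."""
        return (phi_a @ phi_b) ** 2 / ((phi_a @ phi_a) * (phi_b @ phi_b))

    # For each identified mode, scan the candidate FEM modes restricted to
    # the sensor DOFs and keep the best match.
    for j in range(len(true_ids)):
        scores = [mac(id_modes[:, j], fem_modes[sensors, i]) for i in range(n_fem)]
        best = int(np.argmax(scores))
        print(f"identified mode {j} -> FEM mode {best} (MAC = {scores[best]:.3f})")
    ```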
  6. Semiactive model predictive control (sMPC) can be very effective, but the computational cost of its inherent mixed-integer quadratic programming (MIQP) optimization precludes its use in real-time vibration control. This study proposes training neural networks (NNs) to predict, in real time, only the values of the MIQP's integer variables, called a strategy, for a given structure state. Because the number of strategies is exponential in the number of sMPC horizon steps, the resulting NN can be massive. This study proposes to reduce the NN dimension by exploiting the homogeneity-of-order-one nature of this control problem and, using state vector statistics, to efficiently choose training samples. This study also proposes splitting the single large NN into several much smaller NNs, each predicting a strategy grouping, that together uniquely and efficiently predict the strategy. Given the strategy's integer values, the MIQP optimization reduces to a quadratic programming (QP) problem, solved using a fast QP solver with proposed adaptations: exploiting optimization efficiencies and bounding sub-optimality; using several NN predictions; and reverting to a simpler (suboptimal) semiactive control algorithm upon occasional incorrect NN predictions or QP solver nonconvergence. Shear building examples demonstrate significant online computational cost reductions with control performance comparable to that of the conventional MIQP-based control. 
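    A minimal sketch of why a predicted strategy collapses the MIQP to a QP: once the integer variables are fixed (here, a hypothetical NN prediction clamping two horizon steps to zero), only an equality-constrained QP in the continuous inputs remains. A direct KKT solve stands in below for the fast QP solver and its adaptations; the cost matrices are illustrative.
    ```python
    import numpy as np

    def qp_equality(H, f, A, b):
        """Solve min 0.5 u'Hu + f'u  s.t.  A u = b via the KKT system."""
        n, m = H.shape[0], A.shape[0]
        K = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-f, b])
        return np.linalg.solve(K, rhs)[:n]

    rng = np.random.default_rng(0)
    n = 6                                   # continuous inputs over the horizon
    M = rng.standard_normal((n, n))
    H = M @ M.T + n * np.eye(n)             # positive-definite cost Hessian
    f = rng.standard_normal(n)

    # Hypothetical strategy from the NN: the integer variables say steps 1 and
    # 4 are clamped to zero (e.g., the semiactive device cannot supply the
    # requested force there), which becomes an equality-constraint set.
    strategy = [1, 4]
    A = np.eye(n)[strategy]
    b = np.zeros(len(strategy))

    u = qp_equality(H, f, A, b)
    print("optimal inputs:", np.round(u, 3), "| clamped entries:", u[strategy])
    ```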
  7. Optimal sensor placement is critical for enhancing the effectiveness of monitoring dynamical systems. Deterministic solutions do not reflect the effects of input and parameter uncertainty on the sensor placement. Using a Markov decision process (MDP) and a sensor placement agent, this study proposes a stochastic approach to maximize the gain from placing a fixed number of sensors within the system. Utilizing deep reinforcement learning (DRL), the agent is trained by collecting interactive samples within the environment, which uses an information-theoretic reward function: a measure, based on Shannon entropy, of the identifiability of the model parameters. The goal of the agent is to maximize its expected future reward by selecting, at each step, the action (placing a sensor) that provides the most information. This framework is validated using a synthetic model of a base-isolated structure. To account for the existing uncertainty in the parameters, a prior probability distribution is chosen (e.g., based on expert judgement or a preliminary study) for each model parameter. Further, a probabilistic model of the input is used to reflect input variability. In a deep Q-network (DQN), a type of DRL algorithm, the agent learns a mapping from states (i.e., sensor configurations) to the "quality" of each action at that state, called "Q-values". This network is trained on samples of states, actions, and rewards gathered by interacting with the environment. The modular structure of the framework and the function approximation used in this study make it scalable to complex real-world sensor placement problems in the presence of uncertainties. 
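    A simplified stand-in for the trained agent rather than a DQN: for an assumed linear-Gaussian measurement model, the Shannon-entropy reduction from adding one sensor has a closed form, so a greedy loop can place sensors by exact information gain. The sensitivity matrix, noise variance, and sensor budget below are illustrative.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Linear-Gaussian stand-in: measurement y_i = h_i' theta + noise, so placing
    # sensor i reduces the parameter posterior's Shannon entropy by exactly
    # 0.5 * log(1 + h_i' P h_i / r), where P is the current covariance.
    n_param, n_locations, n_sensors = 4, 12, 3
    r = 0.1 ** 2                                      # measurement noise variance
    H = rng.standard_normal((n_locations, n_param))   # candidate sensitivities
    P = np.eye(n_param)                               # prior parameter covariance

    chosen = []
    for _ in range(n_sensors):
        gains = []
        for i in range(n_locations):
            if i in chosen:
                gains.append(-np.inf)
                continue
            h = H[i]
            gains.append(0.5 * np.log(1.0 + h @ P @ h / r))  # entropy reduction
        best = int(np.argmax(gains))
        chosen.append(best)
        h = H[best]
        P = P - np.outer(P @ h, h @ P) / (h @ P @ h + r)     # covariance update
    print("greedy sensor placement:", chosen)
    ```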