Title: multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows
Assessing the effects of input uncertainty on simulation results for multiphase flows will allow for more robust engineering designs and improved devices. For example, in atomizing jets, surface tension plays a critical role in determining when and how coherent liquid structures break up. Uncertainty in the surface tension coefficient can lead to uncertainty in spray angle, drop size, and velocity distribution. Uncertainty quantification (UQ) determines how input uncertainties affect outputs, and the approach taken can be classified as non-intrusive or intrusive. A classical, non-intrusive approach is the Monte Carlo scheme, which requires multiple simulation runs using samples from a distribution of inputs. Statistics on output variability are computed from the many simulation outputs. While non-intrusive schemes are straightforward to implement, they can quickly become cost-prohibitive, suffer from convergence issues, and have problems with confounding factors, making it difficult to examine uncertainty in multiple variables at once. Alternatively, an intrusive scheme inserts stochastic (uncertain) variables into the governing equations, modifying the mathematics and numerical methods used, but possibly reducing computational cost. In this work, we extend UQ methods developed for single-phase flows to handle gas-liquid multiphase dynamics by developing a stochastic conservative level set approach and a stochastic continuous surface tension method. An oscillating droplet and a 2-D atomizing jet are used to test the method. In these test cases, uncertainty in the surface tension coefficient and initial position is explored, including the impact on breaking/merging interfaces.
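The non-intrusive baseline contrasted above can be illustrated with a toy stand-in for the flow solver. The sketch below samples an uncertain surface-tension coefficient and propagates each sample through Rayleigh's linear frequency formula for an oscillating inviscid droplet, then computes output statistics from the ensemble. The water-like mean (0.072 N/m), its assumed spread, and the droplet radius are illustrative choices, and the analytic formula stands in for an expensive multiphase simulation.

```python
import math
import random
import statistics

def droplet_frequency(sigma, rho=1000.0, radius=1e-3, n=2):
    """Rayleigh's linear oscillation frequency of mode n of an inviscid
    droplet: omega^2 = n(n-1)(n+2) * sigma / (rho * R^3). A cheap analytic
    stand-in for the multiphase flow solver."""
    return math.sqrt(n * (n - 1) * (n + 2) * sigma / (rho * radius ** 3))

def monte_carlo_uq(n_samples=2000, seed=0):
    """Non-intrusive UQ: draw samples of the uncertain surface-tension
    coefficient, rerun the model once per sample, and compute output
    statistics from the resulting ensemble."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(n_samples):
        sigma = rng.gauss(0.072, 0.005)  # uncertain surface tension [N/m]
        outputs.append(droplet_frequency(sigma))
    return statistics.mean(outputs), statistics.stdev(outputs)
```

The per-sample cost is the full simulation, which is why this approach becomes prohibitive for high-fidelity multiphase solvers and motivates the intrusive alternative.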
Award ID(s): 1511325
PAR ID: 10098577
Journal Name: 14th Triennial International Conference on Liquid Atomization and Spray Systems
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Uncertainty is a common feature in first-principles models that are widely used in various engineering problems. Uncertainty quantification (UQ) has become an essential procedure to improve the accuracy and reliability of model predictions. Polynomial chaos expansion (PCE) has been used as an efficient approach for UQ by approximating uncertainty with orthogonal polynomial basis functions of standard distributions (e.g., normal) chosen from the Askey scheme. However, uncertainty in practice may not be represented well by standard distributions. In this case, the convergence rate and accuracy of PCE-based UQ cannot be guaranteed. Further, when models involve non-polynomial forms, PCE-based UQ can be computationally impractical in the presence of many parametric uncertainties. To address these issues, Gram-Schmidt (GS) orthogonalization and the generalized dimension reduction method (gDRM) are integrated with the PCE in this work to deal with many parametric uncertainties that follow arbitrary distributions. The performance of the proposed method is demonstrated on three benchmark cases, including two chemical engineering problems, comparing UQ accuracy and computational efficiency against available algorithms (e.g., non-intrusive PCE).
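A minimal sketch of the Gram-Schmidt ingredient described above: construct polynomials orthonormal with respect to an arbitrary (here, empirical) measure by orthogonalizing the monomials under a sample-average inner product. This is only the basis-construction step, not the full gDRM-PCE pipeline, and the sample distribution used in practice would come from the actual parametric uncertainty.

```python
import random

def poly_eval(coeffs, x):
    """Evaluate a polynomial given ascending coefficients."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

def gram_schmidt_basis(samples, degree):
    """Polynomials orthonormal w.r.t. the empirical measure of `samples`,
    built by (modified) Gram-Schmidt on the monomials 1, x, x^2, ..."""
    n = degree + 1
    def inner(p, q):
        return sum(poly_eval(p, x) * poly_eval(q, x) for x in samples) / len(samples)
    basis = []
    for d in range(n):
        p = [1.0 if k == d else 0.0 for k in range(n)]  # monomial x^d
        for q in basis:
            proj = inner(p, q)  # q is already normalized, so no division needed
            p = [a - proj * b for a, b in zip(p, q)]
        norm = inner(p, p) ** 0.5
        basis.append([c / norm for c in p])
    return basis
```

When the measure is a standard distribution, these reduce (up to normalization) to the corresponding Askey-scheme family; for arbitrary samples they give a tailored basis with the orthogonality the PCE projection requires.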
  2. Climate models are generally calibrated manually by comparing selected climate statistics, such as the global top-of-atmosphere energy balance, to observations. The manual tuning only targets a limited subset of observational data and parameters. Bayesian calibration can estimate climate model parameters and their uncertainty using a larger fraction of the available data and automatically exploring the parameter space more broadly. In Bayesian learning, it is natural to exploit the seasonal cycle, which has large amplitude compared with anthropogenic climate change in many climate statistics. In this study, we develop methods for the calibration and uncertainty quantification (UQ) of model parameters exploiting the seasonal cycle, and we demonstrate a proof-of-concept with an idealized general circulation model (GCM). UQ is performed using the calibrate-emulate-sample approach, which combines stochastic optimization and machine learning emulation to speed up Bayesian learning. The methods are demonstrated in a perfect-model setting through the calibration and UQ of a convective parameterization in an idealized GCM with a seasonal cycle. Calibration and UQ based on seasonally averaged climate statistics, compared to annually averaged, reduces the calibration error by up to an order of magnitude and narrows the spread of the non-Gaussian posterior distributions by factors between two and five, depending on the variables used for UQ. The reduction in the spread of the parameter posterior distribution leads to a reduction in the uncertainty of climate model predictions.
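As a much-simplified illustration of calibrating against a seasonal cycle, the sketch below runs plain random-walk Metropolis on a toy sinusoidal "climate" whose amplitude is the uncertain parameter. This is not the calibrate-emulate-sample machinery the study actually uses (ensemble optimization plus an emulator); it only shows the Bayesian-calibration idea of fitting monthly statistics rather than a single annual number. All model details and noise levels are illustrative.

```python
import math
import random

def seasonal_model(amplitude, months=12):
    """Toy 'climate model': a seasonal cycle whose amplitude is the
    parameter to be calibrated."""
    return [amplitude * math.sin(2 * math.pi * m / months) for m in range(months)]

def metropolis_calibrate(obs, n_steps=5000, noise=0.1, seed=0):
    """Random-walk Metropolis sampling of the posterior over the amplitude,
    given noisy monthly observations and a flat prior."""
    rng = random.Random(seed)
    def log_like(a):
        return -sum((y - m) ** 2 for y, m in zip(obs, seasonal_model(a))) / (2 * noise ** 2)
    a, ll = 1.0, log_like(1.0)
    chain = []
    for _ in range(n_steps):
        cand = a + rng.gauss(0.0, 0.1)
        ll_cand = log_like(cand)
        if math.log(rng.random()) < ll_cand - ll:  # Metropolis accept/reject
            a, ll = cand, ll_cand
        chain.append(a)
    return chain
```

Because twelve monthly values constrain the amplitude much more tightly than their annual mean (which is zero here), the seasonal statistics yield a narrow posterior, mirroring the paper's finding that seasonal averaging sharpens the calibration.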
  3. This paper presents an approach for efficient uncertainty analysis (UA) using an intrusive generalized polynomial chaos (gPC) expansion. The key step of gPC-based uncertainty quantification (UQ) is the stochastic Galerkin (SG) projection, which can convert a stochastic model into a set of coupled deterministic models. The SG projection generally yields a high-dimensional integration problem with respect to the number of random variables used to describe the parametric uncertainties in a model. However, when the number of uncertainties is large and the governing equations of the system are highly nonlinear, it can be challenging to derive explicit expressions for the gPC coefficients with the SG approach because of slow convergence in the SG projection. To tackle this challenge, we propose to use a bivariate dimension reduction method (BiDRM) in this work to approximate a high-dimensional integral in the SG projection with a few one- and two-dimensional integrations. The efficiency of the proposed method is demonstrated with three different examples, including chemical reactions and cell signaling. As compared to other UA methods, such as Monte Carlo simulations and non-intrusive stochastic collocation (SC), the proposed method shows superior performance in terms of computational efficiency and UA accuracy.
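The stochastic Galerkin projection can be made concrete on the simplest possible model. For du/dt = -(k̄ + σξ)u with u(0) = 1 and ξ ~ N(0,1), expanding u in probabilists' Hermite polynomials ψ_i and using the recurrence ξψ_j = ψ_{j+1} + jψ_{j-1} turns the single stochastic ODE into a coupled deterministic system du_i/dt = -k̄u_i - σ(u_{i-1} + (i+1)u_{i+1}). The sketch below integrates that truncated system with forward Euler; the parameter values are illustrative, and this linear case avoids the high-dimensional integrals that BiDRM is designed to approximate.

```python
def galerkin_decay(k_mean, k_std, t_end, n_modes=6, dt=1e-4):
    """Intrusive stochastic Galerkin system for du/dt = -(k_mean + k_std*xi)*u,
    u(0) = 1, xi ~ N(0,1), expanded in probabilists' Hermite polynomials.
    Galerkin projection of the xi*u term (via xi*psi_j = psi_{j+1} + j*psi_{j-1})
    couples neighboring modes: du_i/dt = -k_mean*u_i - k_std*(u_{i-1} + (i+1)*u_{i+1})."""
    u = [0.0] * n_modes
    u[0] = 1.0  # deterministic initial condition: mean 1, no variability
    for _ in range(round(t_end / dt)):
        du = []
        for i in range(n_modes):
            lower = u[i - 1] if i > 0 else 0.0
            upper = u[i + 1] if i + 1 < n_modes else 0.0
            du.append(-k_mean * u[i] - k_std * (lower + (i + 1) * upper))
        u = [ui + dt * dui for ui, dui in zip(u, du)]
    return u  # u[0] is the mean of u(t_end); higher modes carry the variability
```

One deterministic solve of this coupled system replaces the many independent runs a sampling method would need, which is the cost argument for the intrusive route. (The exact mean here is exp(-k̄t + σ²t²/2), which the truncated system reproduces closely.)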
  4. Objective-driven adaptive sampling is a widely used tool for the optimization of deterministic black-box functions. However, the optimization of stochastic simulation models as found in the engineering, biological, and social sciences is still an elusive task. In this work, we propose a scalable adaptive batch sampling scheme for the optimization of stochastic simulation models with input-dependent noise. The developed algorithm has two primary advantages: (i) by recommending sampling batches, the designer can benefit from parallel computing capabilities, and (ii) by replicating previously observed sampling locations, the method can be scaled to higher-dimensional and noisier functions. Replication improves numerical tractability, as the computational cost of Bayesian optimization methods is known to grow cubically with the number of unique sampling locations. Deciding when to replicate and when to explore depends on which alternative minimizes the posterior prediction error at and around the spatial locations expected to contain the global optimum. The algorithm explores a new sampling location to reduce the interpolation uncertainty and replicates to improve the accuracy of the mean prediction at a single sampling location. Through the application of the proposed sampling scheme to two numerical test functions and one real engineering problem, we show that we can reliably and efficiently find the global optimum of stochastic simulation models with input-dependent noise.
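One concrete reason replication helps scalability, sketched under simple assumptions: collapsing replicated evaluations to per-location sample means keeps the Gaussian-process training set at the number of unique inputs, which is what the cubic factorization cost actually depends on. This is generic preprocessing, not the paper's specific batch-selection rule.

```python
from collections import defaultdict

def collapse_replicates(samples):
    """Replication-aware preprocessing for GP-based Bayesian optimization:
    repeated evaluations at the same input are collapsed to their sample
    mean, with the noise variance at that input shrunk by the replicate
    count. The GP's O(n^3) kernel factorization then scales with the
    number of *unique* locations, not the total number of evaluations."""
    groups = defaultdict(list)
    for x, y in samples:
        groups[x].append(y)
    collapsed = []
    for x, ys in groups.items():
        n = len(ys)
        mean = sum(ys) / n
        collapsed.append((x, mean, n))  # noise variance at x scales as sigma^2 / n
    return collapsed
```

With input-dependent noise, the per-location replicate counts also give a direct empirical handle on how the noise level varies across the input space.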
  5. Stochastic emulation techniques represent a specialized surrogate modeling branch that is appropriate for applications for which the relationship between input and output is stochastic in nature. Their objective is to address the stochastic uncertainty sources by directly predicting the output distribution for a given input. An example of such an application, and the focus of this contribution, is the estimation of the structural response (engineering demand parameter) distribution in seismic risk assessment. In this case, the stochastic uncertainty originates from the aleatoric variability in the seismic hazard description. Note that this is a different uncertainty source from the potential parametric uncertainty associated with structural characteristics or explanatory variables for the seismic hazard (for example, intensity measures), which are treated as the parametric input in the surrogate modeling context. The key challenge in stochastic emulation pertains to addressing heteroscedasticity in the output variability. Relevant approaches to date for addressing this challenge have focused on scalar outputs. In contrast, this paper focuses on the multi-output stochastic emulation problem and presents a methodology for predicting the output correlation matrix while fully addressing heteroscedastic characteristics. This is achieved by introducing a Gaussian Process (GP) regression model for approximating the components of the correlation matrix, and coupling this approximation with a correction step to guarantee positive definite properties for the resultant predictions. For obtaining the observation data to inform the GP calibration, different approaches are examined, relying or not on the existence of replicated samples for the response output. Such samples require that, for a portion of the training points, simulations are repeated for the same inputs and different descriptions of the stochastic uncertainty. This information can be readily used to obtain observations of the response statistics (correlation or covariance in this instance) to inform the GP development. An alternative approach is to use as observations noisy covariance samples based on the sample deviations from a primitive mean approximation. These different observation variants lead to different GP variants that are compared within a comprehensive case study. A computational framework for integrating the correlation matrix approximation within the stochastic emulation for the marginal distribution approximation of each output component is also discussed, to provide the joint response distribution approximation.
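The positive-definiteness correction can take several forms; the sketch below uses one simple variant, shrinkage toward the identity with the blend weight chosen from a Gershgorin lower bound on the smallest eigenvalue. This variant conveniently preserves the unit diagonal of a correlation matrix, but it is only an illustrative stand-in; the paper's own correction step may differ (e.g., eigenvalue clipping).

```python
def make_positive_definite(corr, eps=1e-6):
    """Correct a predicted correlation matrix by blending toward the
    identity, C' = (1-a)*C + a*I. The weight a is chosen from the
    Gershgorin lower bound g on the smallest eigenvalue so that
    lambda_min(C') >= (1-a)*g + a >= eps, guaranteeing positive
    definiteness while keeping the diagonal equal to 1."""
    n = len(corr)
    g = min(corr[i][i] - sum(abs(corr[i][j]) for j in range(n) if j != i)
            for i in range(n))
    if g >= eps:
        return [row[:] for row in corr]  # already safely positive definite
    a = (eps - g) / (1.0 - g)  # g < eps <= 1 implies 0 < a <= 1
    return [[(1.0 - a) * corr[i][j] + (a if i == j else 0.0) for j in range(n)]
            for i in range(n)]
```

Shrinkage of this kind perturbs every off-diagonal entry uniformly, whereas eigenvalue clipping perturbs only the offending directions; which trade-off is preferable depends on how far the GP predictions fall from the positive definite cone.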