For energy-assisted compression ignition (EACI) engine propulsion at high-altitude operating conditions using sustainable jet fuels with varying cetane numbers, it is essential to develop an efficient engine control system for robust and optimal operation. Control systems are typically trained using experimental data, which can be costly and time-consuming to generate due to experimental setup time, unforeseen delays and issues in manufacturing, mishaps and engine failures with consequent repairs (which can take weeks), and measurement errors. Computational fluid dynamics (CFD) simulations can overcome such burdens by complementing experiments with simulated data for control system training. Such simulations, however, can be computationally expensive. Existing data-driven machine learning (ML) models have shown promise for emulating the expensive CFD simulator, but encounter key limitations here due to the expensive nature of the training data and the range of differing combustion behaviors (e.g., misfires and partial/delayed ignition) observed over such broad operating conditions. We thus develop a novel physics-integrated emulator, called the Misfire-Integrated GP (MInt-GP), which integrates important auxiliary information on engine misfires within a Gaussian process surrogate model. With limited CFD training data, we show that the MInt-GP can yield reliable predictions of in-cylinder pressure evolution profiles, subsequent heat release profiles, and engine CA50 over a broad range of input conditions. We further demonstrate much better prediction capabilities of the MInt-GP at different combustion behaviors compared to existing data-driven ML models such as kriging and neural networks, while also observing up to an 80-times computational speed-up over CFD, thus establishing its effectiveness as a tool to assist CFD for fast data generation in control system training.
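The gating idea behind a misfire-aware surrogate can be sketched in a few lines. The toy 1-D "engine map", the threshold classifier, and all names below are hypothetical illustrations of the general pattern (classify misfire conditions first, then regress with a GP on firing cases only), not the actual MInt-GP implementation:

```python
import numpy as np

def rbf(x1, x2, ls=0.2):
    # Squared-exponential kernel for 1-D inputs.
    return np.exp(-0.5 * ((x1[:, None] - x2[None, :]) / ls) ** 2)

def gp_fit_predict(Xtr, ytr, Xte, noise=1e-6):
    # Standard GP regression: posterior mean and variance at test inputs.
    K = rbf(Xtr, Xtr) + noise * np.eye(len(Xtr))
    Ks = rbf(Xte, Xtr)
    mean = Ks @ np.linalg.solve(K, ytr)
    v = np.linalg.solve(K, Ks.T)
    var = np.maximum(1.0 - np.einsum('ij,ji->i', Ks, v), 0.0)
    return mean, var

# Toy "engine map": conditions below x = 0.3 misfire and produce no
# usable pressure response; above it, a smooth response exists.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 40)
misfire = X < 0.3
y = np.where(misfire, 0.0, np.sin(2 * np.pi * X))

# Gate: a trivial misfire classifier (here just a learned threshold).
threshold = X[misfire].max()

# Regressor: GP trained only on the firing cases.
x_test = np.array([0.1, 0.6])
mean, var = gp_fit_predict(X[~misfire], y[~misfire], x_test)
pred = np.where(x_test <= threshold, 0.0, mean)  # gated prediction
```

A plain GP trained on all 40 points would smear the misfire discontinuity across neighboring conditions; the gate keeps the two regimes separate, which is the qualitative benefit the abstract describes.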
Physics-Based Regression vs. CFD for Hagen-Poiseuille and Womersley Flows and Uncertainty Quantification
Computational fluid dynamics (CFD) and its uncertainty quantification are computationally expensive. We use Gaussian process (GP) methods to demonstrate that machine learning can build efficient and accurate surrogate models that replace CFD simulations at significantly reduced computational cost without compromising physical accuracy. We also demonstrate that both epistemic uncertainty (machine learning model uncertainty) and aleatory uncertainty (randomness in the inputs of CFD) can be accommodated when the machine learning model is used to reveal fluid dynamics. The demonstration is performed on simulations of Hagen-Poiseuille and Womersley flows, which involve spatial and spatio-temporal responses, respectively. Training points are generated from the analytical solutions on evenly discretized spatial or spatio-temporal variables. GP surrogate models are then built using supervised machine-learning regression, and the error of each GP model is quantified by its estimated epistemic uncertainty. The results are compared with those from GPU-accelerated volumetric lattice Boltzmann simulations. They indicate that surrogate models can produce accurate fluid dynamics (without CFD simulations) with quantified uncertainty when both epistemic and aleatory uncertainties exist.
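The workflow in the abstract (analytical training points, GP regression, epistemic variance from the GP, aleatory uncertainty propagated through the surrogate) can be sketched for Hagen-Poiseuille flow. The kernel, length scales, grids, and input distribution below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

R, mu = 1.0, 1.0  # pipe radius and dynamic viscosity (illustrative units)

def u_analytic(r, G):
    # Hagen-Poiseuille profile: u(r) = G * (R^2 - r^2) / (4 * mu).
    return G * (R ** 2 - r ** 2) / (4 * mu)

def rbf(A, B, ls):
    # Anisotropic squared-exponential kernel on (r, G) inputs.
    d2 = (((A[:, None, :] - B[None, :, :]) / ls) ** 2).sum(axis=-1)
    return np.exp(-0.5 * d2)

# Training points: analytical solution on an evenly discretized (r, G) grid.
r_grid = np.linspace(0.0, R, 9)
G_grid = np.linspace(0.5, 1.5, 5)
Xtr = np.array([(r, G) for r in r_grid for G in G_grid])
ytr = u_analytic(Xtr[:, 0], Xtr[:, 1])

ls = np.array([0.3, 0.5])
K = rbf(Xtr, Xtr, ls) + 1e-6 * np.eye(len(Xtr))

def gp_predict(Xte):
    Ks = rbf(Xte, Xtr, ls)
    mean = Ks @ np.linalg.solve(K, ytr)
    v = np.linalg.solve(K, Ks.T)
    var = np.maximum(1.0 - np.einsum('ij,ji->i', Ks, v), 0.0)  # epistemic
    return mean, var

# Aleatory UQ: Monte Carlo over a random pressure gradient G at fixed r.
sampler = np.random.default_rng(1)
G_samples = sampler.normal(1.0, 0.1, 500)
means, evars = gp_predict(np.column_stack([np.full(500, 0.5), G_samples]))
total_sd = np.sqrt(means.var() + evars.mean())  # aleatory + epistemic
```

The key point mirrored from the abstract is that both uncertainty sources coexist: `evars` is the model's own (epistemic) uncertainty, while the spread of `means` over the sampled inputs carries the aleatory contribution.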
- Award ID(s):
- 1803845
- PAR ID:
- 10381940
- Editor(s):
- Brehm, Christoph; Pandya, Shishir
- Date Published:
- Journal Name:
- Eleventh International Conference on Computational Fluid Dynamics (ICCFD11)
- Page Range / eLocation ID:
- 1 - 11
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Modelling of fluid–particle interactions is a major area of research in many fields of science and engineering. Several techniques allow modelling of such interactions, among which the coupling of computational fluid dynamics (CFD) and the discrete element method (DEM) is one of the most convenient solutions due to its balance between accuracy and computational cost. However, the accuracy of this method depends strongly on mesh size: obtaining realistic results requires a small mesh, which in turn increases the computational burden. To compensate for the inaccuracies of using a large mesh in such modelling, while still taking advantage of rapid computations, we extended the classical modelling approach by combining it with a machine learning model. We conducted seven simulations: the first is a numerical model with a fine mesh (i.e. the ground truth), with very high computational time and accuracy; the next three models are constructed on coarse meshes with considerably less accuracy and computational burden; and the last three models are assisted by machine learning, where we obtain large improvements in observing fine-scale features while still using a coarse mesh. The results of this study show that there is a great opportunity for machine learning to improve classical fluid–particle modelling approaches by producing highly accurate models of large-scale systems in a reasonable time.
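The coarse-mesh-plus-ML idea can be illustrated schematically: learn a map from a coarse-grid solution to the fine-grid solution it underresolves, then apply it to a new coarse run. The 1-D two-mode "field", grids, and ridge regression below are toy assumptions standing in for the much richer CFD-DEM setting:

```python
import numpy as np

x_fine = np.linspace(0.0, 1.0, 33)
x_coarse = x_fine[::4]            # every 4th node: the "large mesh"

def field(a, b, x):
    # Stand-in for a simulated field at operating parameters (a, b).
    return a * np.sin(2 * np.pi * x) + b * np.sin(4 * np.pi * x)

# Training pairs: (coarse solution, fine solution) over sampled parameters.
rng = np.random.default_rng(2)
coefs = rng.normal(size=(12, 2))
C = np.array([field(a, b, x_coarse) for a, b in coefs])   # (12, 9)
F = np.array([field(a, b, x_fine) for a, b in coefs])     # (12, 33)

# Ridge regression: learn a linear "upscaling" map, fine ≈ coarse @ W.
lam = 1e-8
W = np.linalg.solve(C.T @ C + lam * np.eye(C.shape[1]), C.T @ F)

# Apply the learned correction to an unseen operating point.
pred_fine = field(0.7, -0.4, x_coarse) @ W
err = np.abs(pred_fine - field(0.7, -0.4, x_fine)).max()
```

Because the training fields here span a low-dimensional space, the linear map recovers the fine-scale features almost exactly; in a real CFD-DEM setting a nonlinear model would play the same role.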
-
Computational fluid dynamics (CFD) simulations are broadly used in many engineering and physics fields. CFD requires the solution of the Navier–Stokes (N-S) equations under complex flow and boundary conditions. However, applications of CFD simulations are computationally limited by the availability, speed, and parallelism of high-performance computing. To address this, machine learning techniques have been employed to create data-driven approximations for CFD to accelerate computational efficiency. Unfortunately, these methods predominantly depend on large labeled CFD datasets, which are costly to procure at the scale required for robust model development. In response, we introduce a weakly supervised approach that, through a multichannel input capturing boundary and geometric conditions, solves steady-state N-S equations. Our method achieves state-of-the-art results without relying on labeled simulation data, instead using a custom data-driven and physics-informed loss function and small-scale solutions to prime the model for solving the N-S equations. By training stacked models, we enhance resolution and predictability, yielding high-quality numerical solutions to N-S equations without hefty computational demands. Remarkably, our model, being highly adaptable, produces solutions on a 512 × 512 domain in a swift 7 ms, outpacing traditional CFD solvers by a factor of 1,000. This paves the way for real-time predictions on consumer hardware and Internet of Things devices, thereby boosting the scope, speed, and cost-efficiency of solving boundary-value fluid problems.
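The label-free training signal can be shown in miniature: instead of supervising against stored CFD solutions, drive the discrete residual of the governing equation to zero. The sketch below does this for a steady 2-D Laplace boundary-value problem (a deliberately simple stand-in for the steady N-S system); each step is gradient descent on the discrete Dirichlet energy, whose minimizer satisfies the 5-point Laplace equation:

```python
import numpy as np

n = 15                       # interior grid points per side
u = np.zeros((n + 2, n + 2))
u[0, :] = 1.0                # Dirichlet data: one "hot" wall, rest cold

# Unsupervised, physics-only training signal: the discrete 5-point
# Laplace residual. No labeled solution data is ever used.
omega = 0.2
for _ in range(4000):
    residual = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
                - 4.0 * u[1:-1, 1:-1])
    u[1:-1, 1:-1] += omega * residual   # descend on the residual

residual = (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4.0 * u[1:-1, 1:-1])
final_residual = np.abs(residual).max()
```

By symmetry the converged field takes the value 0.25 at the cavity center, which makes the result easy to check even though no reference solution was supplied during "training".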
-
Yamashita, Y.; Kano, M. (Ed.) Bayesian hybrid models (BHMs) fuse physics-based insights with machine learning constructs to correct for systematic bias. In this paper, we demonstrate a scalable computational strategy to embed BHMs in an equation-oriented modelling environment. The paper thus generalizes stochastic programming, which traditionally focuses on aleatoric uncertainty (as characterized by a probability distribution for uncertain model parameters), to also consider epistemic uncertainty, i.e., model-form uncertainty or systematic bias as modelled by the Gaussian process in the BHM. As an illustrative example, we consider ballistic firing using a BHM that includes a simplified glass-box (i.e., equation-oriented) model that neglects air resistance and a Gaussian process model to account for the systematic bias (i.e., epistemic or model-form uncertainty) induced by the model simplification. The gravity parameter and the GP hyperparameters are inferred from data in a Bayesian framework, yielding a posterior distribution. A novel single-stage stochastic program formulation using the posterior samples and Gaussian quadrature rules is proposed to compute the optimal decisions (e.g., firing angle and velocity) that minimize the expected value of an objective (e.g., distance from a stationary target). PySMO is used to generate expressions for the GP prediction mean and uncertainty in Pyomo, enabling efficient optimization with gradient-based solvers such as Ipopt. A scaling study characterizes the solver time and number of iterations for up to 2,000 samples from the posterior.
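The single-stage idea can be mimicked in a few lines of numpy (a brute-force toy, not the Pyomo/Ipopt formulation in the paper, and without the GP bias term): fake a posterior for gravity with Gaussian samples and pick the firing angle that minimizes the expected squared miss distance over those samples.

```python
import numpy as np

rng = np.random.default_rng(3)
g_post = rng.normal(9.81, 0.05, 200)   # stand-in posterior samples of g
v, target = 35.0, 100.0                # firing speed (m/s), target range (m)

def ballistic_range(theta, g):
    # Drag-free range of the simplified glass-box model: v^2 sin(2θ) / g.
    return v ** 2 * np.sin(2.0 * theta) / g

# Single-stage stochastic program by brute-force search over the decision:
# minimize the expected squared miss distance over the posterior samples.
angles = np.linspace(0.05, np.pi / 4, 2000)   # restrict to the flat branch
exp_loss = np.array([np.mean((ballistic_range(t, g_post) - target) ** 2)
                     for t in angles])
theta_star = angles[np.argmin(exp_loss)]
```

Averaging the loss over posterior samples, rather than plugging in a point estimate of g, is exactly the step that carries parameter uncertainty into the decision.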
-
Designing and/or controlling complex systems in science and engineering relies on appropriate mathematical modeling of systems dynamics. Classical differential equation based solutions in applied and computational mathematics are often computationally demanding. Recently, the connection between reduced-order models of high-dimensional differential equation systems and surrogate machine learning models has been explored. However, the focus of both existing reduced-order and machine learning models for complex systems has been how to best approximate the high fidelity model of choice. Due to high complexity and often limited training data to derive reduced-order or machine learning surrogate models, it is critical for derived reduced-order models to have reliable uncertainty quantification at the same time. In this paper, we propose such a novel framework of Bayesian reduced-order models naturally equipped with uncertainty quantification as it learns the distributions of the parameters of the reduced-order models instead of their point estimates. In particular, we develop learnable Bayesian proper orthogonal decomposition (BayPOD) that learns the distributions of both the POD projection bases and the mapping from the system input parameters to the projected scores/coefficients so that the learned BayPOD can help predict high-dimensional systems dynamics/fields as quantities of interest in different setups with reliable uncertainty estimates. The developed learnable BayPOD inherits the capability of embedding physics constraints when learning the POD-based surrogate reduced-order models, a desirable feature when studying complex systems in science and engineering applications where the available training data are limited. Furthermore, the proposed BayPOD method is an end-to-end solution, which unlike other surrogate-based methods, does not require separate POD and machine learning steps. 
Results from a real-world case study of the pressure field around an airfoil demonstrate the effectiveness of the proposed BayPOD approach.
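The POD-with-uncertainty ingredients can be sketched with ordinary numpy (this is classical POD plus a probabilistic regression on the scores, a simplified stand-in for the end-to-end learnable BayPOD; the 1-D parameterized "field" and the cubic feature map are illustrative assumptions):

```python
import numpy as np

# Snapshots of a parameterized 1-D "field" q(x; mu) over training parameters.
x = np.linspace(0.0, 1.0, 64)
mus = np.linspace(0.5, 1.5, 20)
snapshots = np.array([np.exp(-mu * x) * np.sin(4 * x) for mu in mus])

# POD: right singular vectors of the centered snapshot matrix give the
# projection basis; projections onto it give the scores/coefficients.
centered = snapshots - snapshots.mean(0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
k = 3
basis = Vt[:k]                         # (k, 64) spatial modes
scores = centered @ basis.T            # (20, k) per-snapshot coefficients

# Map parameter -> scores with polynomial least squares; the residual
# spread gives a crude predictive uncertainty on each coefficient.
P = np.vander(mus, 4)                  # cubic features in mu
coef, *_ = np.linalg.lstsq(P, scores, rcond=None)
resid_sd = (scores - P @ coef).std(0)

def predict(mu):
    s = np.vander([mu], 4) @ coef      # predicted scores for a new mu
    field = snapshots.mean(0) + (s @ basis).ravel()
    return field, resid_sd             # mean field + score-level spread

field_new, sd = predict(1.05)
err = np.abs(field_new - np.exp(-1.05 * x) * np.sin(4 * x)).max()
```

BayPOD itself learns distributions over both the basis and the parameter-to-score map jointly; the point of the sketch is only the pipeline shape: snapshots, a low-rank basis, and a parameter-dependent model on the coefficients with an uncertainty estimate attached.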