Abstract Melt pool dynamics in metal additive manufacturing (AM) are critical to process stability, microstructure formation, and the final properties of printed materials. Physics-based simulation, including computational fluid dynamics (CFD), is the dominant approach for predicting melt pool dynamics; however, it suffers from inherently high computational cost. This paper presents a physics-informed machine learning method that integrates conventional neural networks with the governing physical laws to predict melt pool dynamics, such as temperature, velocity, and pressure, without using any training data on velocity or pressure. The approach avoids solving the nonlinear Navier–Stokes equations numerically, which significantly reduces the computational cost (when the cost of velocity data generation is included). Difficult-to-determine parameter values in the governing equations can also be inferred through data-driven discovery. In addition, the physics-informed neural network (PINN) architecture has been optimized for efficient model training. The data efficiency of the PINN model is attributed to the extra penalty terms that incorporate the governing PDEs, initial conditions, and boundary conditions into the loss.
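The composite PINN loss described above can be sketched minimally. The example below is an illustrative stand-in, not the paper's implementation: it uses a 1-D heat equation in place of the full melt-pool energy and Navier–Stokes equations, finite differences in place of automatic differentiation, and hypothetical function names and weights.

```python
import numpy as np

def pde_residual(u, x, t, alpha):
    """Finite-difference residual of u_t - alpha * u_xx on a uniform grid.

    u has shape (len(x), len(t)); the residual is evaluated at interior
    grid points only (central differences in both space and time).
    """
    dt = t[1] - t[0]
    dx = x[1] - x[0]
    u_t = (u[1:-1, 2:] - u[1:-1, :-2]) / (2.0 * dt)
    u_xx = (u[2:, 1:-1] - 2.0 * u[1:-1, 1:-1] + u[:-2, 1:-1]) / dx**2
    return u_t - alpha * u_xx

def pinn_loss(u, x, t, alpha, u_ic, u_bc_left, u_bc_right,
              w=(1.0, 1.0, 1.0)):
    """Composite loss: PDE residual + initial-condition + boundary penalties.

    In a real PINN these terms penalize the network output; here u is just
    a field on a grid so the structure of the loss is visible.
    """
    r = pde_residual(u, x, t, alpha)
    loss_pde = np.mean(r**2)                      # governing-equation penalty
    loss_ic = np.mean((u[:, 0] - u_ic)**2)        # initial condition at t = 0
    loss_bc = (np.mean((u[0, :] - u_bc_left)**2)  # boundary conditions
               + np.mean((u[-1, :] - u_bc_right)**2))
    return w[0] * loss_pde + w[1] * loss_ic + w[2] * loss_bc
```

A field satisfying the PDE and its initial/boundary conditions drives all three penalty terms toward zero, which is what removes the need for labeled velocity and pressure data.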
Stacked Deep Learning Models for Fast Approximations of Steady-State Navier–Stokes Equations for Low Re Flow
Computational fluid dynamics (CFD) simulations are broadly used in many engineering and physics fields. CFD requires the solution of the Navier–Stokes (N-S) equations under complex flow and boundary conditions. However, applications of CFD simulations are computationally limited by the availability, speed, and parallelism of high-performance computing. To address this, machine learning techniques have been employed to create data-driven approximations for CFD to accelerate computational efficiency. Unfortunately, these methods predominantly depend on large labeled CFD datasets, which are costly to procure at the scale required for robust model development. In response, we introduce a weakly supervised approach that, through a multichannel input capturing boundary and geometric conditions, solves steady-state N-S equations. Our method achieves state-of-the-art results without relying on labeled simulation data, instead using a custom data-driven and physics-informed loss function and small-scale solutions to prime the model for solving the N-S equations. By training stacked models, we enhance resolution and predictability, yielding high-quality numerical solutions to N-S equations without hefty computational demands. Remarkably, our model, being highly adaptable, produces solutions on a 512 × 512 domain in a swift 7 ms, outpacing traditional CFD solvers by a factor of 1,000. This paves the way for real-time predictions on consumer hardware and Internet of Things devices, thereby boosting the scope, speed, and cost-efficiency of solving boundary-value fluid problems.
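A physics-informed loss of the kind described above penalizes the discrete residuals of the steady incompressible N-S equations directly, so no labeled simulation data is needed. The sketch below is an assumption-laden illustration (central differences on a uniform grid, illustrative names and weights), not the paper's multichannel architecture:

```python
import numpy as np

def ns_residuals(u, v, p, dx, dy, nu):
    """Residuals of the steady 2-D incompressible N-S equations at
    interior grid points: continuity and the two momentum components.
    Arrays are indexed [x, y]."""
    def d_dx(f):
        return (f[2:, 1:-1] - f[:-2, 1:-1]) / (2.0 * dx)
    def d_dy(f):
        return (f[1:-1, 2:] - f[1:-1, :-2]) / (2.0 * dy)
    def lap(f):
        return ((f[2:, 1:-1] - 2.0 * f[1:-1, 1:-1] + f[:-2, 1:-1]) / dx**2
                + (f[1:-1, 2:] - 2.0 * f[1:-1, 1:-1] + f[1:-1, :-2]) / dy**2)

    uc, vc = u[1:-1, 1:-1], v[1:-1, 1:-1]
    cont = d_dx(u) + d_dy(v)                              # div(u) = 0
    mom_x = uc * d_dx(u) + vc * d_dy(u) + d_dx(p) - nu * lap(u)
    mom_y = uc * d_dx(v) + vc * d_dy(v) + d_dy(p) - nu * lap(v)
    return cont, mom_x, mom_y

def physics_loss(u, v, p, dx, dy, nu):
    """Unsupervised loss: mean squared residual of each equation."""
    c, mx, my = ns_residuals(u, v, p, dx, dy, nu)
    return np.mean(c**2) + np.mean(mx**2) + np.mean(my**2)
```

Minimizing this loss over the model's predicted fields replaces the supervised regression target that labeled CFD datasets would normally provide.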
- PAR ID: 10532693
- Publisher / Repository: Intelligent Computing
- Date Published:
- Journal Name: Intelligent Computing
- Volume: 3
- ISSN: 2771-5892
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Brehm, Christoph; Pandya, Shishir (Ed.)
Computational fluid dynamics (CFD) and its uncertainty quantification are computationally expensive. We use Gaussian process (GP) methods to demonstrate that machine learning can build efficient and accurate surrogate models that replace CFD simulations at significantly reduced computational cost without compromising physical accuracy. We also demonstrate that both epistemic uncertainty (machine learning model uncertainty) and aleatory uncertainty (randomness in the inputs of CFD) can be accommodated when the machine learning model is used to reveal fluid dynamics. The demonstration is performed by simulating Hagen-Poiseuille and Womersley flows, which involve spatial and spatial-temporal responses, respectively. Training points are generated from the analytical solutions with evenly discretized spatial or spatial-temporal variables. GP surrogate models are then built using supervised machine learning regression, and the error of the GP model is quantified by the estimated epistemic uncertainty. The results are compared with those from GPU-accelerated volumetric lattice Boltzmann simulations and indicate that surrogate models can produce accurate fluid dynamics (without CFD simulations) with quantified uncertainty when both epistemic and aleatory uncertainties exist.
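The GP surrogate idea above can be sketched with a minimal numpy implementation of GP posterior prediction, where the posterior standard deviation plays the role of the epistemic uncertainty. This is an illustration under assumptions (a fixed-hyperparameter RBF kernel, a 1-D input), not the paper's tuned model; the Hagen-Poiseuille profile in the usage example stands in for the analytical training data:

```python
import numpy as np

def rbf(a, b, ell=0.2, sf=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d2 = (a[:, None] - b[None, :])**2
    return sf**2 * np.exp(-0.5 * d2 / ell**2)

def gp_posterior(x_train, y_train, x_test, noise=1e-6, ell=0.2):
    """GP regression: posterior mean and standard deviation at x_test.

    The returned std is the model's epistemic uncertainty: near zero at
    training points, growing away from them.
    """
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    Ks = rbf(x_train, x_test, ell)
    Kss = rbf(x_test, x_test, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(Kss) - np.sum(v**2, axis=0)
    return mean, np.sqrt(np.maximum(var, 0.0))

# Usage: surrogate for a Poiseuille-like velocity profile u(r) = 1 - r^2,
# trained on a few evenly spaced analytical samples.
r_train = np.linspace(0.0, 1.0, 8)
u_train = 1.0 - r_train**2
r_test = np.linspace(0.0, 1.0, 21)
u_mean, u_std = gp_posterior(r_train, u_train, r_test)
```

Once trained, evaluating the surrogate is a couple of matrix-vector products, which is where the speed-up over repeated CFD runs comes from.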
-
For energy-assisted compression ignition (EACI) engine propulsion at high-altitude operating conditions using sustainable jet fuels with varying cetane numbers, it is essential to develop an efficient engine control system for robust and optimal operation. Control systems are typically trained using experimental data, which can be costly and time-consuming to generate due to experimental setup time, unforeseen delays and issues with manufacturing, mishaps and engine failures and the consequent repairs (which can take weeks), and measurement errors. Computational fluid dynamics (CFD) simulations can overcome such burdens by complementing experiments with simulated data for control system training. Such simulations, however, can be computationally expensive. Existing data-driven machine learning (ML) models have shown promise for emulating the expensive CFD simulator but encounter key limitations here due to the expensive nature of the training data and the range of differing combustion behaviors (e.g., misfires and partial/delayed ignition) observed at such broad operating conditions. We thus develop a novel physics-integrated emulator, called the Misfire-Integrated GP (MInt-GP), which integrates important auxiliary information on engine misfires within a Gaussian process surrogate model. With limited CFD training data, we show that the MInt-GP model can yield reliable predictions of in-cylinder pressure evolution profiles, subsequent heat release profiles, and engine CA50 at a broad range of input conditions. We further demonstrate much better prediction capabilities of the MInt-GP at different combustion behaviors compared to existing data-driven ML models such as kriging and neural networks, while also observing up to 80 times computational speed-up over CFD, establishing its effectiveness as a tool to assist CFD for fast data generation in control system training.
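The idea of integrating auxiliary misfire information into a surrogate can be illustrated very roughly: first decide from the auxiliary misfire labels whether a query condition misfires, and only regress a firing pressure trace otherwise. The sketch below is a heavily simplified stand-in (nearest-neighbour classification and regression instead of the paper's Gaussian-process components; all names are hypothetical):

```python
import numpy as np

def predict_pressure(x_query, X_train, misfire_train, P_train, p_motored):
    """Toy misfire-aware surrogate.

    X_train:       (n, d) operating conditions
    misfire_train: (n,) boolean misfire labels (the auxiliary information)
    P_train:       (n, m) pressure traces for each training condition
    p_motored:     (m,) motored (no heat release) trace used for misfires
    """
    # 1. Classify misfire from the nearest training condition.
    i = np.argmin(np.linalg.norm(X_train - x_query, axis=1))
    if misfire_train[i]:
        return p_motored  # misfire: no combustion-driven pressure rise
    # 2. Otherwise regress the trace from firing neighbours only, so
    #    misfire cases cannot corrupt the firing-regime prediction.
    firing = ~misfire_train
    Xf, Pf = X_train[firing], P_train[firing]
    j = np.argmin(np.linalg.norm(Xf - x_query, axis=1))
    return Pf[j]
```

Separating the two regimes is the point: a single smooth regressor fit across misfires and normal combustion would blur the sharp change in behavior between them.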
-
Computational fluid dynamics (CFD) is increasingly used to study blood flows in patient-specific arteries for understanding certain cardiovascular diseases. The techniques work quite well for relatively simple problems but need improvements when the problems become harder: (a) the geometry becomes complex (e.g., from a few branches to a full pulmonary artery), (b) the model becomes more complex (e.g., from fluid-only to coupled fluid-structure interaction), (c) both the fluid and wall models become highly nonlinear, and (d) the computer on which the simulation runs is a supercomputer with tens of thousands of processor cores. To push the limits of CFD on all four fronts, in this paper we develop and study a highly parallel algorithm for solving a monolithically coupled fluid-structure system that models the interaction of the blood flow and the arterial wall. As a case study, we consider a patient-specific, full-size pulmonary artery obtained from computed tomography (CT) images, with an artificially added wall layer of fixed thickness. The fluid is modeled with the incompressible Navier-Stokes equations, and the wall is modeled by a geometrically nonlinear elasticity equation. As far as we know, this is the first time the unsteady blood flow in a full pulmonary artery has been simulated without assuming a rigid wall. The proposed numerical algorithm and software scale well beyond 10,000 processor cores on a supercomputer for solving the fluid-structure interaction problem discretized with a stabilized finite element method in space and an implicit scheme in time, involving hundreds of millions of unknowns.
-
Abstract Applying full-waveform methods to image small-scale structures of geophysical interest buried within the Earth requires computing the seismic wavefield over distances that are large compared to the target wavelengths. This represents a considerable computational cost when using state-of-the-art numerical integration of the equations of motion in three-dimensional earth models. “Box Tomography” is a hybrid method that breaks the wavefield computation into three parts, only one of which needs to be iterated for each model update, significantly saving computational time. To deploy this method in remote regions containing a fluid-solid boundary, one needs to construct artificial sources that confine the seismic wavefield within a small region straddling this boundary. The difficulty arises from the need to combine the fluid-solid coupling with a hybrid numerical simulation in this region. Here, we report a reconciliation of the different displacement potential expressions used for solving the acoustic wave equation and propose a unified framework for hybrid simulations. This represents a significant step towards applying “Box Tomography” in arbitrary regions inside the Earth, achieving a thousand-fold computational cost reduction compared to standard approaches without compromising accuracy. We also present benchmarks of the hybrid simulations for target regions at the ocean floor and the core-mantle boundary.