This content will become publicly available on February 1, 2026

Title: Minimizing nature's cost: Exploring data-free physics-informed neural network solvers for fluid mechanics applications
In this paper, we present a novel approach for fluid dynamics simulations by leveraging the capabilities of Physics-Informed Neural Networks (PINNs) guided by the newly unveiled Principle of Minimum Pressure Gradient (PMPG). In a PINN formulation, the physics problem is converted into a minimization problem (typically least squares). The PMPG asserts that for incompressible flows, the total magnitude of the pressure gradient over the domain must be minimum at every time instant, turning fluid mechanics problems into minimization problems and making the PMPG a natural fit for a PINN formulation. Following the PMPG, the proposed PINN formulation seeks to construct a neural network for the flow field that minimizes Nature's cost function for incompressible flows, in contrast to traditional PINNs that minimize the residuals of the Navier–Stokes equations. This technique eliminates the need to train a separate pressure model, thereby reducing training time and computational costs. We demonstrate the effectiveness of this approach through a case study of inviscid flow around a cylinder. The proposed approach outperforms the traditional PINN approach in terms of training time, convergence rate, and compliance with physical metrics. While demonstrated on a simple geometry, the methodology is extensible to more complex flow fields (e.g., three-dimensional, unsteady, and viscous flows) within the incompressible realm, which is the region of applicability of the PMPG.
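For concreteness, the sketch below shows what a PMPG-style loss could look like for a steady, inviscid, incompressible 2D flow, assuming a velocity-only network `u_net`. This is a minimal illustration, not the authors' code; the divergence penalty and the weight `lambda_div` are assumptions (the paper may enforce incompressibility differently, e.g., through a stream-function output).

```python
# Minimal sketch (not the authors' code): a PMPG-style PINN loss for steady,
# inviscid, incompressible 2D flow. Names (u_net, lambda_div) are illustrative.
import torch

def gradients(f, x):
    """First derivatives of a scalar field f w.r.t. the coordinates x."""
    return torch.autograd.grad(f, x, grad_outputs=torch.ones_like(f),
                               create_graph=True)[0]

def pmpg_loss(u_net, xy, rho=1.0, lambda_div=1.0):
    """Nature's cost: mean squared pressure-gradient magnitude, expressed
    through the momentum balance so no pressure network is needed."""
    xy = xy.clone().requires_grad_(True)
    uv = u_net(xy)                      # network outputs (u, v) at collocation points
    u, v = uv[:, 0:1], uv[:, 1:2]

    du = gradients(u, xy)               # du/dx, du/dy
    dv = gradients(v, xy)               # dv/dx, dv/dy

    # Convective acceleration for steady flow: a = (u·∇)u
    ax = u * du[:, 0:1] + v * du[:, 1:2]
    ay = u * dv[:, 0:1] + v * dv[:, 1:2]

    # For inviscid flow the momentum balance gives ∇p = -ρ a, so |∇p|² = ρ²|a|².
    pressure_gradient_cost = (rho ** 2) * (ax ** 2 + ay ** 2).mean()

    # Incompressibility enforced here as a penalty (a stream-function output
    # would enforce it exactly instead).
    divergence = du[:, 0:1] + dv[:, 1:2]
    return pressure_gradient_cost + lambda_div * (divergence ** 2).mean()
```

Because the pressure gradient is expressed through the momentum balance, only the velocity network is trained, which is the source of the reported savings over a standard two-network (velocity plus pressure) PINN.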
Award ID(s):
2332556
PAR ID:
10632204
Author(s) / Creator(s):
; ;
Publisher / Repository:
AIP
Date Published:
Journal Name:
Physics of Fluids
Volume:
37
Issue:
2
ISSN:
1070-6631
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. PSO-PINN is a class of algorithms for training physics-informed neural networks (PINNs) using particle swarm optimization (PSO). PSO-PINN can mitigate the well-known difficulties presented by gradient-descent training of PINNs when dealing with PDEs with irregular solutions. Additionally, PSO-PINN is an ensemble approach to PINNs that yields reproducible predictions with quantified uncertainty. In this paper, we introduce Multi-Objective PSO-PINN, which treats PINN training as a multi-objective problem. The proposed multi-objective PSO-PINN represents a new paradigm in PINN training, which thus far has relied on scalarizations of the multi-objective loss function. A full multi-objective approach allows on-the-fly compromises in the trade-off among the various components of the PINN loss function. Experimental results with a diffusion PDE problem demonstrate the promise of this methodology.
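As a point of reference, a plain (single-objective) PSO loop over flattened PINN parameters could look like the sketch below; it is an illustrative assumption rather than the authors' implementation, and the multi-objective variant described above would replace the scalar loss with a vector of losses and a Pareto-based selection of bests. `pinn_loss` is an assumed callable mapping a parameter vector to a scalar loss.

```python
# Minimal sketch of a PSO loop over flattened network parameters (illustrative).
import numpy as np

def pso_minimize(pinn_loss, dim, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    pos = rng.normal(scale=0.1, size=(n_particles, dim))   # each particle = one weight vector
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([pinn_loss(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([pinn_loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()

    # The surviving ensemble (pbest) also supports uncertainty estimates over predictions.
    return gbest, pbest
```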
  2. Most variational principles in classical mechanics are based on the principle of least action, which is only a stationary principle. In contrast, Gauss' principle of least constraint is a true minimum principle. In this paper, we apply Gauss' principle to the mechanics of incompressible flows, thereby discovering the fundamental quantity that Nature minimizes in most flows encountered in everyday life. We show that the magnitude of the pressure gradient over the domain is minimum at every instant of time. We call it the principle of minimum pressure gradient (PMPG). It turns a fluid mechanics problem into a minimization one. We demonstrate this intriguing property by solving four classical problems in fluid mechanics using the PMPG without resorting to the Navier–Stokes equations. In some cases, the PMPG minimization approach is no more efficient than solving the Navier–Stokes equations. However, in other cases, it is more insightful and efficient. In fact, the inviscid version of the PMPG allowed solving the long-standing problem of the aerohydrodynamic lift over smooth cylindrical shapes where Euler's equation fails to provide a unique answer. The PMPG transcends the Navier–Stokes equations in its applicability to non-Newtonian fluids with arbitrary constitutive relations and fluids subject to arbitrary forcing (e.g., electromagnetic).
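In present notation, the quantity the PMPG asserts Nature minimizes can be written compactly as below; the normalization may differ from the cited paper, and the momentum balance is used to express it in terms of the velocity field alone:

$$\mathcal{A}(t) \;=\; \int_{\Omega} \left\| \nabla p \right\|^{2}\, \mathrm{d}V
\;=\; \int_{\Omega} \left\| \mu\,\nabla^{2}\mathbf{u} \;-\; \rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) \right\|^{2} \mathrm{d}V,
\qquad \nabla\cdot\mathbf{u} = 0,$$

with the true flow minimizing $\mathcal{A}(t)$ at every instant over all kinematically admissible (divergence-free) velocity fields.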
  3. Haizhao Yang (Ed.)
    Biot's consolidation model in poroelasticity describes the interaction between the fluid and the deformable porous structure. Based on the fixed-stress splitting iterative method proposed by Mikelic et al. (Comput. Geosci., 2013), we present a neural network approach to solve Biot's consolidation model using physics-informed neural networks (PINNs). Methods: Two small, independent neural networks are used to solve the displacement and pressure variables separately. Accordingly, separate loss functions are proposed, and the fixed-stress splitting iterative algorithm is used to couple these variables. Error analysis is provided to support the capability of the proposed fixed-stress splitting-based PINNs (FS-PINNs). Results: Several numerical experiments are performed to evaluate the effectiveness and accuracy of our approach, including the pure Dirichlet problem, the mixed partial Neumann and partial Dirichlet problem, and Barry–Mercer's problem. The performance of FS-PINNs is superior to traditional PINNs, demonstrating the effectiveness of our approach. Discussion: Our study highlights the successful application of PINNs with the fixed-stress splitting iterative method to tackle Biot's model. The ability to use independent neural networks for displacement and pressure offers computational advantages while maintaining accuracy. The proposed approach shows promising potential for solving other similar geoscientific problems.
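The alternating structure behind such a fixed-stress splitting of two networks could look roughly like the sketch below. It is illustrative only: `flow_loss` and `mechanics_loss` stand in for the PDE-residual losses of Biot's flow and mechanics subproblems and are assumptions, not the authors' code.

```python
# Minimal sketch of an outer fixed-stress iteration coupling two small PINNs.
import torch

def fixed_stress_pinn(p_net, u_net, flow_loss, mechanics_loss,
                      collocation, outer_iters=10, inner_steps=500, lr=1e-3):
    for _ in range(outer_iters):
        # Flow step: update the pressure network with the displacement network frozen.
        opt_p = torch.optim.Adam(p_net.parameters(), lr=lr)
        for _ in range(inner_steps):
            opt_p.zero_grad()
            loss = flow_loss(p_net, u_net, collocation)
            loss.backward()
            opt_p.step()

        # Mechanics step: update the displacement network with the pressure network frozen.
        opt_u = torch.optim.Adam(u_net.parameters(), lr=lr)
        for _ in range(inner_steps):
            opt_u.zero_grad()
            loss = mechanics_loss(p_net, u_net, collocation)
            loss.backward()
            opt_u.step()
    return p_net, u_net
```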
  4. This study employs physics-informed neural networks (PINNs) to reconstruct multiple flow fields in a transient natural convection system solely based on instantaneous temperature data at an arbitrary moment. Transient convection problems present reconstruction challenges due to the temporal variability of fields across different flow phases. In general, large reconstruction errors are observed during the incipient phase, while the quasi-steady phase exhibits relatively smaller errors, reduced by a factor of 2–4. We hypothesize that reconstruction errors vary across different flow phases due to the changing solution space of a PINN, inferred from the temporal gradients of the fields. Furthermore, we find that reconstruction errors tend to accumulate in regions where the spatial gradients are smaller than the order of 10⁻⁶, likely due to the vanishing gradient phenomenon. In convection phenomena, field variations often manifest across multiple scales in space. However, PINN-based reconstruction tends to preserve larger-scale variations, while smaller-scale variations become less pronounced due to the vanishing gradient problem. To mitigate the errors associated with vanishing gradients, we introduce a multi-scale approach that determines scaling constants for the PINN inputs and reformulates inputs across multiple scales. This approach reduces the maximum and mean errors by 72.2% and 6.4%, respectively. Our research provides insight into the behavior of PINNs when applied to transient convection problems with a large solution space and field variations across multiple scales.
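One way such a multi-scale input reformulation could be implemented is sketched below; the scale constants, layer sizes, and input/output dimensions are illustrative assumptions, not the study's settings.

```python
# Minimal sketch: feed a PINN its coordinates at several scales so that
# small-scale variations keep producing usable gradients.
import torch
import torch.nn as nn

class MultiScalePINN(nn.Module):
    def __init__(self, in_dim=3, out_dim=4, scales=(1.0, 10.0, 100.0), width=64):
        super().__init__()
        self.scales = scales
        self.net = nn.Sequential(
            nn.Linear(in_dim * len(scales), width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, out_dim),
        )

    def forward(self, xyt):
        # Concatenate copies of the inputs multiplied by each scaling constant.
        scaled = [xyt * s for s in self.scales]
        return self.net(torch.cat(scaled, dim=-1))
```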
  5. This work presents a two-stage adaptive framework for progressively developing deep neural network (DNN) architectures that generalize well for a given training dataset. In the first stage, a layerwise training approach is adopted where a new layer is added each time and trained independently by freezing parameters in the previous layers. We impose desirable structures on the DNN by employing manifold regularization, sparsity regularization, and physics-informed terms. We introduce an $\epsilon$-$\delta$ stability-promoting concept as a desirable property for a learning algorithm and show that employing manifold regularization yields an $\epsilon$-$\delta$ stability-promoting algorithm. Further, we also derive the necessary conditions for the trainability of a newly added layer and investigate the training saturation problem. In the second stage of the algorithm (post-processing), a sequence of shallow networks is employed to extract information from the residual produced in the first stage, thereby improving the prediction accuracy. Numerical investigations on prototype regression and classification problems demonstrate that the proposed approach can outperform fully connected DNNs of the same size. Moreover, by equipping the physics-informed neural network (PINN) with the proposed adaptive architecture strategy to solve partial differential equations, we numerically show that adaptive PINNs not only are superior to standard PINNs but also produce interpretable hidden layers with provable stability. We also apply our architecture design strategy to solve inverse problems governed by elliptic partial differential equations.
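The first-stage layerwise growth could be organized roughly as in the sketch below; the regularizers (manifold, sparsity, physics-informed terms) are abstracted into an assumed `loss_fn`, and all names here are illustrative rather than the paper's code.

```python
# Minimal sketch of stage one: append a new hidden layer and train it while
# all previously trained layers stay frozen.
import torch
import torch.nn as nn

def add_layer_and_train(frozen_layers, in_dim, new_width, loss_fn, x, y,
                        steps=1000, lr=1e-3):
    # Freeze every parameter learned in earlier stages.
    for layer in frozen_layers:
        for p in layer.parameters():
            p.requires_grad_(False)

    new_layer = nn.Linear(in_dim, new_width)
    head = nn.Linear(new_width, 1)            # readout, retrained at each stage
    model = nn.Sequential(*frozen_layers, new_layer, nn.Tanh(), head)

    opt = torch.optim.Adam(list(new_layer.parameters()) + list(head.parameters()), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model, x, y)            # data fit + regularization terms live here
        loss.backward()
        opt.step()

    frozen_layers += [new_layer, nn.Tanh()]    # grow the trunk for the next stage
    return frozen_layers, new_width
```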