Title: Physics-Guided, Physics-Informed, and Physics-Encoded Neural Networks and Operators in Scientific Computing: Fluid and Solid Mechanics
Abstract: Advancements in computing power have recently made it possible to utilize machine learning and deep learning to advance scientific computing in a range of disciplines, such as fluid mechanics, solid mechanics, and materials science. The incorporation of neural networks is particularly crucial in this hybridization process. Due to their intrinsic architecture, conventional neural networks cannot be successfully trained and scoped when data are sparse, which is the case in many scientific and engineering domains. Nonetheless, neural networks provide a solid foundation for respecting physics-driven or knowledge-based constraints during training. Generally speaking, there are three distinct neural network frameworks for enforcing the underlying physics: (i) physics-guided neural networks (PgNNs), (ii) physics-informed neural networks (PiNNs), and (iii) physics-encoded neural networks (PeNNs). These methods offer distinct advantages for accelerating the numerical modeling of complex multiscale, multiphysics phenomena. In addition, recent developments in neural operators (NOs) add another dimension to these new simulation paradigms, especially when real-time prediction of complex multiphysics systems is required. All of these models also come with their own unique drawbacks and limitations that call for further fundamental research. This study presents a review of the four neural network frameworks (i.e., PgNNs, PiNNs, PeNNs, and NOs) used in scientific computing research. The state-of-the-art architectures and their applications are reviewed, limitations are discussed, and future research opportunities are presented in terms of improving algorithms, considering causalities, expanding applications, and coupling scientific and deep learning solvers.
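The PiNN idea summarized above, penalizing the residual of a governing equation during training, can be illustrated with a minimal toy sketch (not taken from the review). Here a one-parameter ansatz u(x) = exp(a·x) stands in for the network, and gradient descent drives the mean-squared residual of the ODE du/dx = -u toward zero; a real PiNN would use a neural network and automatic differentiation instead of the analytic derivative used here.

```python
import numpy as np

# Toy "physics-informed" fit (illustrative sketch, not from the review):
# learn the parameter a in the ansatz u(x) = exp(a*x) so that the ODE
# residual r(x) = u'(x) + u(x) = (a + 1) * exp(a*x) vanishes, i.e. u
# solves du/dx = -u. No labeled data is needed: the loss is purely the
# physics residual sampled at collocation points.

def physics_loss(a, xs):
    u = np.exp(a * xs)
    du = a * u                     # analytic derivative of the ansatz
    residual = du + u              # enforce du/dx + u = 0
    return np.mean(residual ** 2)

def grad_physics_loss(a, xs, eps=1e-6):
    # central finite difference; enough for a scalar-parameter sketch
    return (physics_loss(a + eps, xs) - physics_loss(a - eps, xs)) / (2 * eps)

xs = np.linspace(0.0, 1.0, 50)     # collocation points
a = 0.0                            # initial guess
for _ in range(2000):
    a -= 0.05 * grad_physics_loss(a, xs)

# a converges toward -1, recovering the exact solution u(x) = exp(-x)
```

In a full PiNN the same composite-loss structure appears, with a data-misfit term added to the residual term when sparse measurements are available.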
Award ID(s):
2122041
PAR ID:
10509576
Publisher / Repository:
Journal of Computing and Information Science in Engineering
Journal Name:
Journal of Computing and Information Science in Engineering
Volume:
24
Issue:
4
ISSN:
1530-9827
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Pappas, George; Ravikumar, Pradeep; Seshia, Sanjit A (Ed.)
    We study the problem of learning neural network models for Ordinary Differential Equations (ODEs) with parametric uncertainties. Such neural network models capture the solution to the ODE over a given set of parameters, initial conditions, and range of times. Physics-Informed Neural Networks (PINNs) have emerged as a promising approach for learning such models, combining data-driven deep learning with symbolic physics models in a principled manner. However, the accuracy of PINNs degrades when they are used to solve an entire family of initial value problems characterized by varying parameters and initial conditions. In this paper, we combine symbolic differentiation and Taylor series methods to propose a class of higher-order models for capturing the solutions to ODEs. These models combine neural networks and symbolic terms: they use higher-order Lie derivatives and a Taylor series expansion obtained symbolically, with the remainder term modeled as a neural network. The key insight is that the remainder term can itself be modeled as a solution to a first-order ODE. We show how the use of these higher-order PINNs can improve accuracy on interesting but challenging ODE benchmarks. We also show that the resulting model can be quite useful for situations such as controlling uncertain physical systems modeled as ODEs.
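The higher-order Taylor construction described in this abstract can be sketched with a hypothetical minimal example (not the paper's benchmark). For the linear ODE x' = -x, the k-th Lie derivative of x along the vector field is (-1)^k x, so the symbolic Taylor model is available in closed form; raising the truncation order shrinks the remainder that the paper's method would hand to a neural network.

```python
import numpy as np
from math import factorial

# Hypothetical sketch: for x' = f(x) with f(x) = -x, the k-th Lie
# derivative of x is (-1)^k * x, so the degree-n symbolic Taylor model is
#   x(t) ~= x0 * sum_{k=0}^{n} (-t)^k / k!
# The paper models the truncation remainder with a neural network; here
# we only verify that a higher order leaves a smaller remainder to learn.

def taylor_solution(x0, t, order):
    return x0 * sum((-1.0) ** k * t ** k / factorial(k)
                    for k in range(order + 1))

t = np.linspace(0.0, 1.0, 20)
exact = np.exp(-t)                          # exact solution for x0 = 1
err2 = np.max(np.abs(taylor_solution(1.0, t, 2) - exact))
err6 = np.max(np.abs(taylor_solution(1.0, t, 6) - exact))
# err6 << err2: the order-6 remainder is far easier to model
```

The neural network in the paper's construction would only need to capture the small residual gap (here err6) rather than the full solution.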
    Physics-informed neural networks (PINNs) have been widely utilized to numerically approximate PDE problems. While PINNs have achieved good results in producing solutions for many partial differential equations, studies have shown that they do not perform well on phase-field models. In this paper, we partially address this issue by introducing a modified physics-informed neural network. In particular, it is used to numerically approximate the Allen-Cahn-Ohta-Kawasaki (ACOK) equation with a volume constraint. Technically, the inverse of the Laplacian in the ACOK model presents many challenges to the baseline PINN. To take the zero-mean condition of the inverse of the Laplacian, as well as the volume constraint, into consideration, we also introduce a separate neural network, which takes a second set of sampling points in the approximation process. Numerical results are shown to demonstrate the effectiveness of the modified PINN. An additional benefit of this research is that the modified PINN can also be applied to learn more general nonlocal phase-field models, even with an unknown nonlocal kernel.
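The zero-mean condition on the inverse Laplacian mentioned above can be made concrete on a periodic 1-D domain, where (-Δ)^{-1} amounts to dividing each Fourier mode by k² and pinning the zero mode to zero. This is an illustrative spectral sketch, not the paper's two-network scheme:

```python
import numpy as np

# On a periodic domain, (-Laplacian)^{-1} is well defined only on
# zero-mean inputs, and fixing the output's zero mode to zero enforces
# the zero-mean condition on the result. In Fourier space this is just
# division by k^2, with the k = 0 mode excluded.

N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
w = np.cos(3 * x)                                   # zero-mean input

k = np.fft.fftfreq(N, d=2.0 * np.pi / N) * 2.0 * np.pi  # integer wavenumbers
wh = np.fft.fft(w)
vh = np.zeros_like(wh)
nonzero = k != 0
vh[nonzero] = wh[nonzero] / (k[nonzero] ** 2)       # (-Lap)^{-1} in Fourier
v = np.real(np.fft.ifft(vh))                        # zero-mean since vh[0] = 0

# Analytically, (-Lap)^{-1} cos(3x) = cos(3x) / 9, and mean(v) = 0.
```

A baseline PINN has no such spectral shortcut, which is one reason the nonlocal term requires the separate network described in the abstract.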
    In this paper, we present a novel approach to fluid dynamic simulations by leveraging the capabilities of Physics-Informed Neural Networks (PINNs) guided by the newly unveiled Principle of Minimum Pressure Gradient (PMPG). In a PINN formulation, the physics problem is converted into a minimization problem (typically least squares). The PMPG asserts that for incompressible flows, the total magnitude of the pressure gradient over the domain must be a minimum at every time instant; it thus turns fluid mechanics itself into a minimization problem, making it an excellent fit for a PINN formulation. Following the PMPG, the proposed PINN formulation seeks to construct a neural network for the flow field that minimizes Nature's cost function for incompressible flows, in contrast to traditional PINNs that minimize the residuals of the Navier–Stokes equations. This technique eliminates the need to train a separate pressure model, thereby reducing training time and computational costs. We demonstrate the effectiveness of this approach through a case study of inviscid flow around a cylinder. The proposed approach outperforms the traditional PINN approach in terms of training time, convergence rate, and compliance with physical metrics. While demonstrated on a simple geometry, the methodology is extensible to more complex flow fields (e.g., three-dimensional, unsteady, and viscous flows) within the incompressible realm, which is the region of applicability of the PMPG.
    Physics-guided Neural Networks (PGNNs) represent an emerging class of neural networks that are trained using physics-guided (PG) loss functions (capturing violations of known physics in the network outputs), along with the supervision contained in data. Existing work on PGNNs has demonstrated the efficacy of adding a single PG loss function to the neural network objective, using a constant trade-off parameter, to ensure better generalizability. However, in the presence of multiple PG functions with competing gradient directions, there is a need to adaptively tune the contribution of different PG loss functions during the course of training to arrive at generalizable solutions. We demonstrate the presence of competing PG losses in the generic neural network problem of solving for the lowest (or highest) eigenvector of a physics-based eigenvalue equation, which is commonly encountered in many scientific problems. We present a novel approach to handle competing PG losses and demonstrate its efficacy in learning generalizable solutions in two motivating applications: quantum mechanics and electromagnetic propagation. All the code and data used in this work are available at https://github.com/jayroxis/Cophy-PGNN.
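One generic way to see how competing loss terms can be balanced adaptively is to rescale each term's gradient by its magnitude so that neither dominates the update direction. This is a common strategy sketched on a hypothetical two-term toy problem; it is not necessarily the CoPhy-PGNN schedule, which the paper defines in full.

```python
import numpy as np

# Two "PG losses" pull a 2-parameter model toward different targets,
# and loss_b is 10x steeper, so with fixed unit weights it would
# dominate training. Normalizing each gradient by its norm keeps the
# two competing terms on an equal footing.

def loss_a(theta):
    return np.sum((theta - np.array([1.0, 0.0])) ** 2)

def loss_b(theta):
    return 10.0 * np.sum((theta - np.array([0.0, 1.0])) ** 2)

def grad(f, theta, eps=1e-6):
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        d = np.zeros_like(theta)
        d[i] = eps
        g[i] = (f(theta + d) - f(theta - d)) / (2 * eps)
    return g

theta = np.zeros(2)
for _ in range(500):
    ga, gb = grad(loss_a, theta), grad(loss_b, theta)
    wa = 1.0 / (np.linalg.norm(ga) + 1e-8)   # adaptive trade-off weights
    wb = 1.0 / (np.linalg.norm(gb) + 1e-8)
    theta -= 0.05 * (wa * ga + wb * gb)

# theta settles midway between the two competing targets, near (0.5, 0.5),
# instead of collapsing toward the steeper loss_b target.
```

With fixed weights the same descent would end near the loss_b optimum; the adaptive rescaling is what yields a compromise between the competing PG terms.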
    The integration of machine learning in power systems, particularly in stability and dynamics, addresses the challenges brought by the integration of renewable energies and distributed energy resources (DERs). Traditional methods for power system transient stability, which involve solving differential equations with numerical techniques, face limitations due to their time-consuming and computationally demanding nature. This paper introduces Physics-Informed Neural Networks (PINNs) as a promising solution to these challenges, especially in scenarios with limited data availability and a need for high computational speed. PINNs offer a novel approach for complex power systems by incorporating additional equations and adapting to various system scales, from a single bus to multi-bus networks. Our study presents the first comprehensive evaluation of PINNs in the context of power system transient stability, addressing various grid complexities. Additionally, we introduce a novel approach for adjusting loss weights to improve the adaptability of PINNs to diverse systems. Our experimental findings reveal that PINNs can be efficiently scaled while maintaining high accuracy. Furthermore, these results suggest that PINNs significantly outperform the traditional ode45 method in terms of efficiency, especially as system size increases, showcasing a progressive speed advantage over ode45.
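For transient stability, the kind of physics residual a PINN would penalize can be sketched with the classic single-machine swing equation (a hypothetical minimal example, not the paper's multi-bus setup): M·δ'' + D·δ' = Pm - Pmax·sin(δ). The residual must vanish on a true trajectory, which we check here for the steady equilibrium.

```python
import numpy as np

# Single-machine swing equation residual (illustrative sketch):
#   M * delta'' + D * delta' - (Pm - Pmax * sin(delta)) = 0
# A transient-stability PINN penalizes this residual at sampled times;
# here we evaluate it on the equilibrium angle delta = arcsin(Pm/Pmax),
# where the electrical and mechanical power balance exactly.

M, D, Pm, Pmax = 0.1, 0.05, 0.8, 1.0          # assumed toy parameters
t = np.linspace(0.0, 5.0, 200)
delta = np.full_like(t, np.arcsin(Pm / Pmax))  # steady equilibrium angle

d1 = np.gradient(delta, t)                     # delta'  (finite differences)
d2 = np.gradient(d1, t)                        # delta''
residual = M * d2 + D * d1 - (Pm - Pmax * np.sin(delta))
# residual is ~0 everywhere for the equilibrium trajectory
```

During training, a candidate network trajectory with a nonzero residual would be pushed toward satisfying the swing dynamics, without ever invoking a time-stepping solver such as ode45.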