Machine learning is increasingly applied to power system stability and dynamics to address the challenges introduced by renewable energy and distributed energy resources (DERs). Traditional transient stability analysis relies on numerically solving systems of differential equations, which is time-consuming and computationally demanding. This paper presents physics-informed neural networks (PINNs) as a promising alternative, especially when data are scarce and high computational speed is required. PINNs offer a novel approach for complex power systems by incorporating additional governing equations and adapting to various system scales, from a single bus to multi-bus networks. Our study presents the first comprehensive evaluation of PINNs for power system transient stability across grids of varying complexity. Additionally, we introduce a novel approach for adjusting loss weights to improve the adaptability of PINNs to diverse systems. Our experiments show that PINNs scale efficiently while maintaining high accuracy, and that their speed advantage over the conventional ode45 solver grows progressively as system size increases.
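For concreteness, the sketch below shows what a transient-stability PINN with weighted loss terms can look like for a single-machine infinite-bus system governed by the swing equation. This is a minimal illustration under assumed placeholders, not the authors' implementation: the network size, the per-unit parameters `M`, `D`, `P_m`, `P_e`, and the weights `w_init`/`w_phys` are all illustrative.

```python
import torch

# Minimal single-machine infinite-bus sketch: the network maps time t to the
# rotor angle delta(t), and the swing equation
#   M * delta'' + D * delta' + P_e * sin(delta) - P_m = 0
# is enforced as a weighted residual alongside the initial conditions.
M, D, P_m, P_e = 0.1, 0.05, 0.8, 1.0     # illustrative per-unit parameters
w_init, w_phys = 1.0, 1.0                # adjustable loss weights

net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss(t_colloc, t0, delta0, omega0):
    # swing-equation residual at collocation times
    t = t_colloc.requires_grad_(True)
    delta = net(t)
    d_delta = torch.autograd.grad(delta, t, torch.ones_like(delta), create_graph=True)[0]
    dd_delta = torch.autograd.grad(d_delta, t, torch.ones_like(d_delta), create_graph=True)[0]
    residual = M * dd_delta + D * d_delta + P_e * torch.sin(delta) - P_m
    # initial conditions on rotor angle and speed
    t0 = t0.requires_grad_(True)
    delta_t0 = net(t0)
    omega_t0 = torch.autograd.grad(delta_t0, t0, torch.ones_like(delta_t0), create_graph=True)[0]
    loss_init = ((delta_t0 - delta0) ** 2).mean() + ((omega_t0 - omega0) ** 2).mean()
    return w_init * loss_init + w_phys * (residual ** 2).mean()

optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
t_colloc = torch.rand(256, 1) * 2.0      # collocation points on [0, 2] s
t0, delta0, omega0 = torch.zeros(1, 1), torch.tensor([[0.2]]), torch.tensor([[0.0]])
for _ in range(2000):
    optimizer.zero_grad()
    loss = pinn_loss(t_colloc, t0, delta0, omega0)
    loss.backward()
    optimizer.step()
```

Adjusting `w_init` and `w_phys` (or making them adaptive) is where loss-weighting schemes such as the one studied in the paper would enter.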
Temporal consistency loss for physics-informed neural networks
Physics-informed neural networks (PINNs) have been widely used to solve partial differential equations (PDEs) in both forward and inverse settings. However, balancing individual loss terms can be challenging, particularly when training these networks for stiff PDEs or in scenarios that require enforcing numerous constraints. Even though statistical methods can be applied to assign relative weights to the regression loss on data, assigning relative weights to equation-based loss terms remains a formidable task. This paper proposes a method for assigning relative weights to the mean squared loss terms in the objective function used to train PINNs. Because the governing equation contains temporal gradients, the physics-informed loss can be recast using numerical integration through backward Euler discretization. The physics-uninformed and physics-informed networks should then yield identical predictions when evaluated at corresponding spatiotemporal positions; we refer to this consistency as “temporal consistency.” This redefinition of the loss function allows relative weights to be assigned using statistical properties of the observed data. In this work, we consider the two- and three-dimensional Navier–Stokes equations and determine the kinematic viscosity from spatiotemporal data on the velocity and pressure fields. We use numerical datasets to test our method and examine its sensitivity to the timestep size, the number of timesteps, noise in the data, and spatial resolution. Finally, we use a velocity field obtained from particle image velocimetry experiments to generate a reference pressure field and test our framework on these velocity and pressure fields.
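To make the idea concrete, here is a minimal PyTorch sketch of a temporal-consistency-style loss for the 1-D heat equation rather than the Navier–Stokes equations treated in the paper. The unknown coefficient `nu`, the network size, and the variance-based weighting are illustrative assumptions, not the authors' code.

```python
import torch

# Sketch for the 1-D heat equation u_t = nu * u_xx, discretized in time with
# backward Euler:
#   u(x, t_n) ≈ u(x, t_{n+1}) - dt * nu * u_xx(x, t_{n+1}).
# The network's direct ("physics-uninformed") prediction at t_n should agree
# with this physics-informed reconstruction; both mismatches are scaled by the
# statistics of the observed data so the loss terms share a common weight.
nu = torch.nn.Parameter(torch.tensor(0.05))     # unknown coefficient to be inferred
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def temporal_consistency_loss(x, t, dt, u_data):
    x = x.requires_grad_(True)
    u_n = net(torch.cat([x, t], dim=1))          # direct prediction at t_n
    u_np1 = net(torch.cat([x, t + dt], dim=1))   # prediction at t_{n+1}
    u_x = torch.autograd.grad(u_np1, x, torch.ones_like(u_np1), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    u_n_physics = u_np1 - dt * nu * u_xx         # backward-Euler reconstruction at t_n
    w = 1.0 / u_data.var().detach()              # relative weight from data statistics
    loss_data = w * ((u_n - u_data) ** 2).mean()
    loss_consistency = w * ((u_n - u_n_physics) ** 2).mean()
    return loss_data + loss_consistency

# Example usage on synthetic snapshots at scattered (x, t_n) points
x = torch.rand(512, 1)
t = torch.rand(512, 1)
u_data = torch.sin(torch.pi * x) * torch.exp(-t)   # stand-in for measured data
loss = temporal_consistency_loss(x, t, dt=0.01, u_data=u_data)
```

In a training loop, `nu` would be optimized jointly with the network parameters, which is how the kinematic viscosity is inferred in the inverse setting described above.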
- Award ID(s): 2141404
- PAR ID: 10565206
- Publisher / Repository: AIP
- Date Published:
- Journal Name: Physics of Fluids
- Volume: 36
- Issue: 7
- ISSN: 1070-6631
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
Melt pool dynamics in metal additive manufacturing (AM) is critical to process stability, microstructure formation, and the final properties of the printed materials. Physics-based simulation, including computational fluid dynamics (CFD), is the dominant approach for predicting melt pool dynamics, but it suffers from very high computational cost. This paper provides a physics-informed machine learning method that integrates conventional neural networks with the governing physical laws to predict melt pool dynamics, such as temperature, velocity, and pressure, without using any training data on velocity or pressure. The approach avoids solving the nonlinear Navier–Stokes equations numerically, which significantly reduces the computational cost once the cost of generating velocity training data is taken into account. Values of difficult-to-determine parameters in the governing equations can also be inferred through data-driven discovery. In addition, the physics-informed neural network (PINN) architecture has been optimized for efficient model training. The data efficiency of the PINN model is attributed to the extra penalty terms that incorporate the governing PDEs, initial conditions, and boundary conditions.
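A minimal sketch of the data-sparse idea described above, assuming a single network that outputs temperature and flow variables from (x, y, t): labeled data supervise temperature only, while a PDE residual (here just incompressibility, standing in for the full momentum and energy equations) constrains velocity and pressure. The network sizes and output ordering are placeholders, not the paper's model.

```python
import torch

# One network predicts temperature T and flow (u, v, p) from (x, y, t); only T
# has labels, and the continuity equation constrains the unlabeled outputs.
net = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 4),                 # outputs: T, u, v, p
)

def melt_pool_loss(xyt_data, T_data, xyt_colloc):
    # data loss on temperature only -- no velocity or pressure labels needed
    T_pred = net(xyt_data)[:, 0:1]
    loss_T = ((T_pred - T_data) ** 2).mean()

    # continuity residual du/dx + dv/dy at collocation points
    xyt = xyt_colloc.requires_grad_(True)
    out = net(xyt)
    u, v = out[:, 1:2], out[:, 2:3]
    du = torch.autograd.grad(u, xyt, torch.ones_like(u), create_graph=True)[0]
    dv = torch.autograd.grad(v, xyt, torch.ones_like(v), create_graph=True)[0]
    div = du[:, 0:1] + dv[:, 1:2]
    return loss_T + (div ** 2).mean()

# Example usage with hypothetical measurement and collocation points
xyt_data = torch.rand(128, 3)          # (x, y, t) where temperature was measured
T_data = torch.rand(128, 1)
xyt_colloc = torch.rand(1024, 3)       # collocation points for the PDE residual
total = melt_pool_loss(xyt_data, T_data, xyt_colloc)
```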
In this paper, we apply a machine-learning approach to learn traveling solitary waves across various physical systems described by families of partial differential equations (PDEs). Our approach integrates a novel interpretable neural network architecture, called Separable Gaussian Neural Networks (SGNN), into the framework of physics-informed neural networks (PINNs). Unlike traditional PINNs, which treat spatial and temporal data as independent inputs, the present method leverages wave characteristics to transform the data into a co-traveling wave frame. This reformulation effectively addresses the issue of propagation failure in PINNs applied to large computational domains. The SGNN architecture demonstrates robust approximation capabilities for single-peakon, multi-peakon, and stationary solutions (known as “leftons”) within the (1+1)-dimensional b-family of PDEs. In addition, we extend our investigation to peakon solutions in the ab-family and compacton solutions in the (2+1)-dimensional Rosenau-Hyman family of PDEs. A comparative analysis with a multi-layer perceptron (MLP) reveals that SGNN achieves comparable accuracy with fewer than a tenth of the neurons, underscoring its efficiency and potential for broader application in solving complex nonlinear PDEs.
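The sketch below illustrates the co-traveling wave frame idea together with one plausible reading of a "separable Gaussian" feature layer (1-D Gaussian bases per coordinate combined multiplicatively). It is not the authors' SGNN implementation; the wave speed `c`, the basis count, and the linear head are assumptions for illustration only.

```python
import torch

class SeparableGaussianFeatures(torch.nn.Module):
    # One reading of a "separable Gaussian" layer: each input coordinate is
    # expanded in 1-D Gaussian basis functions with learnable centers and
    # widths, and the per-coordinate features are combined multiplicatively.
    def __init__(self, n_basis=16):
        super().__init__()
        self.centers = torch.nn.Parameter(torch.linspace(-1.0, 1.0, n_basis).repeat(2, 1))
        self.log_width = torch.nn.Parameter(torch.zeros(2, n_basis))
        self.head = torch.nn.Linear(n_basis, 1)

    def forward(self, xi, t):
        # xi is the co-traveling coordinate, t is time
        coords = torch.stack([xi, t], dim=1)                  # (N, 2)
        z = (coords.unsqueeze(-1) - self.centers) / self.log_width.exp()
        g = torch.exp(-0.5 * z ** 2)                          # (N, 2, n_basis)
        return self.head(g.prod(dim=1))                       # separable product

# Co-traveling wave frame: for a wave of speed c, work with xi = x - c * t so a
# traveling solitary wave becomes (nearly) stationary over a large domain.
c = 1.0                                                       # assumed wave speed
model = SeparableGaussianFeatures()
x = torch.linspace(-10.0, 10.0, 200)
t = torch.full_like(x, 2.5)
u_pred = model(x - c * t, t)
```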
Traditional data-driven deep learning models often struggle with high training costs, error accumulation, and poor generalizability when applied to complex physical processes. Physics-informed deep learning (PiDL) addresses these challenges by incorporating physical principles into the model. Most PiDL approaches regularize training by embedding governing equations into the loss function, yet this depends heavily on extensive hyperparameter tuning to weight each loss term. To this end, we propose to leverage physics prior knowledge by “baking” the discretized governing equations into the neural network architecture, via the connection between partial differential equation (PDE) operators and network structures, resulting in a PDE-preserved neural network (PPNN). This method, which embeds discretized PDEs through convolutional residual networks in a multi-resolution setting, largely improves generalizability and long-term prediction accuracy, outperforming conventional black-box models. The effectiveness and merit of the proposed method are demonstrated across various spatiotemporal dynamical systems governed by PDEs, including the reaction-diffusion, Burgers’, and Navier-Stokes equations.
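As a rough illustration of "baking" a discretized operator into the architecture, the block below hard-codes a 5-point finite-difference Laplacian as a frozen convolution inside a residual update, with a small trainable convolutional branch for the unresolved dynamics. The time step, diffusivity, and correction network are placeholders; the paper's multi-resolution ConvResNet is not reproduced here.

```python
import torch

class PDEPreservedBlock(torch.nn.Module):
    # A residual update whose "physics branch" is a fixed finite-difference
    # stencil (a 5-point Laplacian for a diffusion term), while a small
    # trainable convolutional branch learns the remaining dynamics.
    def __init__(self, dt=0.01, nu=0.1, dx=1.0):
        super().__init__()
        lap = torch.tensor([[[[0., 1., 0.],
                              [1., -4., 1.],
                              [0., 1., 0.]]]]) / dx ** 2
        self.register_buffer("laplacian", lap)                # frozen PDE operator
        self.dt, self.nu = dt, nu
        self.correction = torch.nn.Sequential(                # learnable residual branch
            torch.nn.Conv2d(1, 16, 3, padding=1), torch.nn.ReLU(),
            torch.nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, u):
        physics = self.nu * torch.nn.functional.conv2d(u, self.laplacian, padding=1)
        return u + self.dt * (physics + self.correction(u))   # explicit time step

# One forward step on a random field with shape (batch, channel, H, W)
u0 = torch.rand(1, 1, 64, 64)
u1 = PDEPreservedBlock()(u0)
```

Because the Laplacian weights are registered as a buffer rather than parameters, the discretized PDE operator is preserved exactly through training, which is the structural prior the PPNN idea exploits.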
This paper proposes a scalable learning framework to solve a system of coupled forward–backward partial differential equations (PDEs) arising from mean field games (MFGs). The MFG system incorporates a forward PDE that models the propagation of the population dynamics and a backward PDE for a representative agent’s optimal control. Existing work mainly focuses on solving for the mean field game equilibrium (MFE) of the MFG system under fixed boundary conditions, including the initial population state and terminal cost. To obtain the MFE efficiently, particularly when the initial population density and terminal cost vary, we use a physics-informed neural operator (PINO) to tackle the forward–backward PDEs. A learning algorithm is devised, and its performance is evaluated on one application domain: autonomous driving velocity control. Numerical experiments show that our method obtains the MFE accurately for different initial distributions of vehicles. The PINO exhibits both memory efficiency and generalization capabilities compared to physics-informed neural networks (PINNs).
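To show the shape of such a physics-informed operator for coupled forward–backward PDEs, the sketch below maps a sampled initial density to space-time density and value fields and penalizes finite-difference residuals of a textbook quadratic-Hamiltonian MFG. This generic MFG, the grid sizes, and the plain MLP operator are illustrative stand-ins, not the paper's velocity-control model or its Fourier-layer PINO.

```python
import torch

# Operator-learning sketch: a network maps an initial density rho0 (sampled on
# Nx grid points) to the space-time density rho and value function V, and
# finite-difference residuals of a textbook quadratic-Hamiltonian MFG are
# penalized:
#   forward:  rho_t - (rho * V_x)_x = 0,   rho(0, x) = rho0(x)
#   backward: -V_t + 0.5 * V_x**2 = rho    (terminal condition on V omitted here)
Nx, Nt, dx, dt = 64, 32, 1.0 / 64, 1.0 / 32

operator = torch.nn.Sequential(
    torch.nn.Linear(Nx, 256), torch.nn.Tanh(),
    torch.nn.Linear(256, 256), torch.nn.Tanh(),
    torch.nn.Linear(256, 2 * Nt * Nx),
)

def mfg_residual_loss(rho0):
    out = operator(rho0).reshape(-1, 2, Nt, Nx)
    rho, V = out[:, 0], out[:, 1]                      # shapes (batch, Nt, Nx)
    rho_t, rho_x = torch.gradient(rho, spacing=(dt, dx), dim=(1, 2))
    V_t, V_x = torch.gradient(V, spacing=(dt, dx), dim=(1, 2))
    flux_x = torch.gradient(rho * V_x, spacing=dx, dim=2)[0]
    forward_res = rho_t - flux_x                       # continuity (forward) equation
    backward_res = -V_t + 0.5 * V_x ** 2 - rho         # HJB (backward) equation
    ic_res = rho[:, 0, :] - rho0                       # initial density condition
    return (forward_res ** 2).mean() + (backward_res ** 2).mean() + (ic_res ** 2).mean()

# Training over many sampled initial densities is what lets the operator handle
# a new rho0 at inference time without re-solving the coupled MFG system.
loss = mfg_residual_loss(torch.softmax(torch.randn(8, Nx), dim=1))
```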