Title: A combination of physics-informed neural networks with the fixed-stress splitting iteration for solving Biot's model
Biot's consolidation model in poroelasticity describes the interaction between the fluid and the deformable porous structure. Building on the fixed-stress splitting iterative method proposed by Mikelić et al. (Comput Geosci, 2013), we present a network approach to solving Biot's consolidation model using physics-informed neural networks (PINNs). Methods: Two independent, small neural networks are used to solve for the displacement and pressure variables separately. Accordingly, separate loss functions are proposed, and the fixed-stress splitting iterative algorithm is used to couple the two variables. An error analysis is provided to support the capability of the proposed fixed-stress-splitting-based PINNs (FS-PINNs). Results: Several numerical experiments are performed to evaluate the effectiveness and accuracy of our approach, including the pure Dirichlet problem, a mixed partial-Neumann and partial-Dirichlet problem, and Barry-Mercer's problem. The performance of FS-PINNs is superior to that of traditional PINNs, demonstrating the effectiveness of our approach. Discussion: Our study highlights the successful application of PINNs with the fixed-stress splitting iterative method to tackle Biot's model. The ability to use independent neural networks for displacement and pressure offers computational advantages while maintaining accuracy. The proposed approach shows promising potential for solving other, similar geoscientific problems.
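As a rough illustration of the training scheme described above (a minimal sketch, not the authors' implementation), the snippet below alternates updates of two small PyTorch networks, one for displacement and one for pressure, with the other network's output detached so that each update mimics one half of the fixed-stress outer iteration. The residual functions, network sizes, and names (u_net, p_net, flow_residual, mechanics_residual) are illustrative placeholders standing in for the actual Biot loss terms.

```python
# Minimal sketch of the FS-PINN loop: two small networks trained alternately,
# each against its own (placeholder) residual while the other is frozen.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, width=32, depth=3):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.Tanh()]
        d = width
    layers += [nn.Linear(d, out_dim)]
    return nn.Sequential(*layers)

u_net = mlp(3, 2)   # displacement (u_x, u_y) of (x, y, t); sizes are illustrative
p_net = mlp(3, 1)   # pressure p of (x, y, t)

opt_u = torch.optim.Adam(u_net.parameters(), lr=1e-3)
opt_p = torch.optim.Adam(p_net.parameters(), lr=1e-3)

xyt = torch.rand(1024, 3, requires_grad=True)   # collocation points

def flow_residual(p, u, xyt):
    # placeholder for the fixed-stress flow-equation residual (Biot terms omitted)
    return (p ** 2).mean()

def mechanics_residual(u, p, xyt):
    # placeholder for the momentum-balance residual (Biot terms omitted)
    return (u ** 2).mean()

for outer in range(20):                # fixed-stress (outer) iterations
    for _ in range(200):               # step 1: update pressure, displacement frozen
        opt_p.zero_grad()
        loss_p = flow_residual(p_net(xyt), u_net(xyt).detach(), xyt)
        loss_p.backward(); opt_p.step()
    for _ in range(200):               # step 2: update displacement, pressure frozen
        opt_u.zero_grad()
        loss_u = mechanics_residual(u_net(xyt), p_net(xyt).detach(), xyt)
        loss_u.backward(); opt_u.step()
```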
Award ID(s):
2228010 1831950
PAR ID:
10493535
Author(s) / Creator(s):
; ; ;
Editor(s):
Haizhao Yang
Publisher / Repository:
Frontiers
Date Published:
Journal Name:
Frontiers in Applied Mathematics and Statistics
Volume:
9
ISSN:
2297-4687
Subject(s) / Keyword(s):
physics-informed neural networks; the fixed-stress method; Biot's model; iterative algorithm; separated networks
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we present a novel approach to fluid dynamic simulations by leveraging the capabilities of Physics-Informed Neural Networks (PINNs) guided by the newly unveiled Principle of Minimum Pressure Gradient (PMPG). In a PINN formulation, the physics problem is converted into a minimization problem (typically least squares). The PMPG asserts that, for incompressible flows, the total magnitude of the pressure gradient over the domain must be a minimum at every time instant; this turns fluid mechanics itself into a minimization problem and makes the principle an excellent fit for a PINN formulation. Following the PMPG, the proposed formulation seeks to construct a neural network for the flow field that minimizes Nature's cost function for incompressible flows, in contrast to traditional PINNs, which minimize the residuals of the Navier–Stokes equations. This technique eliminates the need to train a separate pressure model, thereby reducing training time and computational cost. We demonstrate the effectiveness of this approach through a case study of inviscid flow around a cylinder. The proposed approach outperforms the traditional PINN approach in terms of training time, convergence rate, and compliance with physical metrics. While demonstrated on a simple geometry, the methodology is extensible to more complex flow fields (e.g., three-dimensional, unsteady, and viscous flows) within the incompressible realm, which is the region of applicability of the PMPG.
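A minimal sketch of how such a PMPG-style loss might look for 2-D incompressible, inviscid flow, assuming (per the inviscid momentum balance) that the pressure gradient is proportional to the material acceleration; the names (pmpg_loss, vel_net) are hypothetical, and the continuity penalty is included here only to keep the learned velocity field divergence-free.

```python
# PMPG-style PINN loss sketch: penalize squared material acceleration
# (a surrogate for |grad p|^2) plus an incompressibility penalty.
import torch

def pmpg_loss(vel_net, xyt, rho=1.0):
    xyt = xyt.requires_grad_(True)
    uv = vel_net(xyt)                            # (N, 2): velocities u, v at (x, y, t)
    u, v = uv[:, 0:1], uv[:, 1:2]

    def grads(f):
        g = torch.autograd.grad(f.sum(), xyt, create_graph=True)[0]
        return g[:, 0:1], g[:, 1:2], g[:, 2:3]   # f_x, f_y, f_t

    u_x, u_y, u_t = grads(u)
    v_x, v_y, v_t = grads(v)

    ax = u_t + u * u_x + v * u_y                 # material acceleration components
    ay = v_t + u * v_x + v * v_y
    pmpg = (rho ** 2) * (ax ** 2 + ay ** 2).mean()   # "Nature's cost" surrogate
    cont = ((u_x + v_y) ** 2).mean()                 # divergence-free constraint
    return pmpg + cont
```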
  2. This work presents a two-stage adaptive framework for progressively developing deep neural network (DNN) architectures that generalize well for a given training dataset. In the first stage, a layerwise training approach is adopted in which a new layer is added each time and trained independently, with the parameters of the previous layers frozen. We impose desirable structures on the DNN by employing manifold regularization, sparsity regularization, and physics-informed terms. We introduce an $$\epsilon-\delta$$ stability-promoting concept as a desirable property of a learning algorithm and show that employing manifold regularization yields an $$\epsilon-\delta$$ stability-promoting algorithm. Further, we derive the necessary conditions for the trainability of a newly added layer and investigate the training saturation problem. In the second stage of the algorithm (post-processing), a sequence of shallow networks is employed to extract information from the residual produced in the first stage, thereby improving the prediction accuracy. Numerical investigations on prototype regression and classification problems demonstrate that the proposed approach can outperform fully connected DNNs of the same size. Moreover, by equipping a physics-informed neural network (PINN) with the proposed adaptive architecture strategy to solve partial differential equations, we numerically show that adaptive PINNs are not only superior to standard PINNs but also produce interpretable hidden layers with provable stability. We also apply our architecture design strategy to solve inverse problems governed by elliptic partial differential equations.
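A schematic version of the first (layerwise) stage, with previously trained layers frozen and only the newest layer plus an output head updated; the manifold, sparsity, and physics-informed regularizers described above are abbreviated here to a plain L1 penalty, so this is only a structural sketch under those simplifying assumptions.

```python
# Layerwise growth sketch: add one hidden layer at a time, freezing earlier layers.
import torch
import torch.nn as nn

def train_layerwise(x, y, n_layers=3, width=32, steps=500, l1=1e-4):
    frozen = []                                   # already-trained hidden layers
    in_dim = x.shape[1]
    for k in range(n_layers):
        new_layer = nn.Sequential(nn.Linear(in_dim if k == 0 else width, width), nn.Tanh())
        head = nn.Linear(width, y.shape[1])
        opt = torch.optim.Adam(list(new_layer.parameters()) + list(head.parameters()), lr=1e-3)
        for _ in range(steps):
            h = x
            with torch.no_grad():                 # frozen layers get no gradient updates
                for layer in frozen:
                    h = layer(h)
            pred = head(new_layer(h))
            sparsity = sum(p.abs().sum() for p in new_layer.parameters())
            loss = ((pred - y) ** 2).mean() + l1 * sparsity
            opt.zero_grad(); loss.backward(); opt.step()
        frozen.append(new_layer)                  # freeze and grow the next layer
    return frozen, head
```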
  3. This paper explores an iterative approach to solving linear thermo-poroelasticity problems, applied as the high-fidelity finite-element discretization used in training projection-based reduced-order models. One of the main challenges in addressing coupled multi-physics problems is their complexity and computational expense. In this study, we introduce a decoupled iterative solution approach, integrated with reduced-order modeling, aimed at improving the efficiency of the computational algorithm. The iterative technique builds upon the established fixed-stress splitting scheme that has been extensively investigated for Biot's poroelasticity. Leveraging solutions derived from this coupled iterative scheme, the reduced-order model applies an additional Galerkin projection onto a reduced basis space formed by a small number of modes obtained through proper orthogonal decomposition. The effectiveness of the proposed algorithm is demonstrated through numerical experiments, showcasing its computational efficiency.
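The reduced-order ingredient can be sketched as follows: snapshots from the coupled fixed-stress solves are compressed by proper orthogonal decomposition (an SVD of the snapshot matrix), and a reduced operator is formed by Galerkin projection. The snapshot, operator, and right-hand-side arrays below are random stand-ins for the assembled finite-element system, used only to show the shapes of the projection.

```python
# POD + Galerkin projection sketch with stand-in matrices.
import numpy as np

def pod_basis(snapshots, n_modes):
    # snapshots: (n_dof, n_snapshots) columns collected from fixed-stress solves
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :n_modes]                        # dominant POD modes

rng = np.random.default_rng(0)
S = rng.standard_normal((500, 40))               # fake snapshot matrix (illustrative)
V = pod_basis(S, n_modes=5)

A = rng.standard_normal((500, 500))              # stand-in full-order operator
b = rng.standard_normal(500)                     # stand-in right-hand side
A_r, b_r = V.T @ A @ V, V.T @ b                  # Galerkin projection
x_r = np.linalg.solve(A_r, b_r)                  # cheap reduced solve
x_approx = V @ x_r                               # lift back to the full space
```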
  4. Pappas, George; Ravikumar, Pradeep; Seshia, Sanjit A (Ed.)
    We study the problem of learning neural network models for Ordinary Differential Equations (ODEs) with parametric uncertainties. Such neural network models capture the solution to the ODE over a given set of parameters, initial conditions, and range of times. Physics-Informed Neural Networks (PINNs) have emerged as a promising approach for learning such models, combining data-driven deep learning with symbolic physics models in a principled manner. However, the accuracy of PINNs degrades when they are used to solve an entire family of initial value problems characterized by varying parameters and initial conditions. In this paper, we combine symbolic differentiation and Taylor series methods to propose a class of higher-order models for capturing the solutions to ODEs. These models combine neural networks and symbolic terms: they use higher-order Lie derivatives and a Taylor series expansion obtained symbolically, with the remainder term modeled as a neural network. The key insight is that the remainder term can itself be modeled as a solution to a first-order ODE. We show how the use of these higher-order PINNs can improve accuracy on interesting but challenging ODE benchmarks. We also show that the resulting models can be quite useful in situations such as controlling uncertain physical systems modeled as ODEs.
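One way to picture the higher-order construction, under the simplifying assumption of the scalar ODE dx/dt = -theta*x whose Lie derivatives are known in closed form: a truncated Taylor expansion supplies the symbolic terms, and a small network models the remainder. The names (x_hat, remainder) and the t^3 scaling of the remainder are illustrative choices, not the paper's exact formulation.

```python
# Higher-order model sketch: symbolic Taylor terms plus a neural remainder.
import torch
import torch.nn as nn

remainder = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))

def x_hat(x0, theta, t):
    # second-order Taylor terms for f(x) = -theta * x:
    #   L_f x evaluated at x0 is -theta*x0, and L_f^2 x is theta^2 * x0
    taylor = x0 - theta * x0 * t + 0.5 * theta**2 * x0 * t**2
    # remainder scaled by t^3 so the model matches the initial condition exactly
    r = remainder(torch.cat([x0, theta, t], dim=-1))
    return taylor + (t ** 3) * r

# training would fit `remainder` over ranges of x0, theta, and t
x0 = torch.full((8, 1), 1.0)
theta = torch.rand(8, 1)
t = torch.rand(8, 1)
pred = x_hat(x0, theta, t)
```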
  5. Physics-informed neural networks (PINNs) have been widely used to solve partial differential equations (PDEs) in both forward and inverse settings. However, balancing the individual loss terms can be challenging, particularly when training these networks for stiff PDEs or in scenarios requiring the enforcement of numerous constraints. Even though statistical methods can be applied to assign a relative weight to the regression loss for data, assigning relative weights to equation-based loss terms remains a formidable task. This paper proposes a method for assigning relative weights to the mean-squared loss terms in the objective function used to train PINNs. Because of the presence of temporal gradients in the governing equation, the physics-informed loss can be recast using numerical integration through backward Euler discretization. The physics-uninformed and physics-informed networks should yield identical predictions when evaluated at corresponding spatiotemporal positions; we refer to this consistency as "temporal consistency." This approach introduces a unique method for training PINNs, redefining the loss function so that relative weights can be assigned based on the statistical properties of the observed data. In this work, we consider the two- and three-dimensional Navier–Stokes equations and determine the kinematic viscosity using spatiotemporal data on the velocity and pressure fields. We use numerical datasets to test our method and examine its sensitivity to the timestep size, the number of timesteps, noise in the data, and spatial resolution. Finally, we use a velocity field obtained from particle image velocimetry experiments to generate a reference pressure field and test our framework using these velocity and pressure fields.
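A compact sketch of one reading of the temporal-consistency idea for a generic PDE u_t = N(u): a backward-Euler step converts the physics residual into a mismatch between the network's direct prediction at t_n and its physics-informed prediction stepped back from t_{n+1}. The spatial operator below is a placeholder diffusion term, and the weight w stands in for the statistically derived relative weight; all names are illustrative.

```python
# Temporal-consistency loss sketch via backward Euler.
import torch

def spatial_operator(u, x):
    # placeholder for N(u), here a diffusion term nu * u_xx obtained via autograd
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    return 0.01 * u_xx

def temporal_consistency_loss(u_net, x, t_n, dt, w=1.0):
    x = x.requires_grad_(True)
    u_n = u_net(torch.cat([x, t_n], dim=-1))                    # physics-uninformed
    u_np1 = u_net(torch.cat([x, t_n + dt], dim=-1))
    u_n_from_physics = u_np1 - dt * spatial_operator(u_np1, x)  # backward-Euler step
    return w * ((u_n - u_n_from_physics) ** 2).mean()
```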