Title: Multi-Objective PSO-PINN
PSO-PINN is a class of algorithms for training physics-informed neural networks (PINN) using particle swarm optimization (PSO). PSO-PINN can mitigate the well-known difficulties presented by gradient descent training of PINNs when dealing with PDEs with irregular solutions. Additionally, PSO-PINN is an ensemble approach to PINN that yields reproducible predictions with quantified uncertainty. In this paper, we introduce Multi-Objective PSO-PINN, which treats PINN training as a multi-objective problem. The proposed multi-objective PSO-PINN represents a new paradigm in PINN training, which thus far has relied on scalarizations of the multi-objective loss function. A full multi-objective approach allows on-the-fly compromises in the trade-off among the various components of the PINN loss function. Experimental results with a diffusion PDE problem demonstrate the promise of this methodology.
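The core idea — replacing a scalarized loss with Pareto-dominance comparisons inside the swarm — can be illustrated on a toy two-objective problem. The sketch below is a hypothetical simplification, not the authors' algorithm: the inertia and acceleration coefficients, the random archive cap, and the leader-selection rule are all assumptions made only for the illustration.

```python
import numpy as np

def dominates(fa, fb):
    # fa Pareto-dominates fb: no worse in every objective, better in at least one
    return np.all(fa <= fb) and np.any(fa < fb)

def pareto_prune(entries):
    # keep only mutually non-dominated (position, objectives) pairs
    return [
        (x, f) for i, (x, f) in enumerate(entries)
        if not any(dominates(g, f) for j, (_, g) in enumerate(entries) if j != i)
    ]

def mo_pso(objectives, dim, n_particles=15, iters=50, archive_cap=60, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-4.0, 4.0, (n_particles, dim))
    v = np.zeros_like(x)
    evaluate = lambda pts: np.array([[obj(p) for obj in objectives] for p in pts])
    f = evaluate(x)
    pbest_x, pbest_f = x.copy(), f.copy()
    archive = pareto_prune(list(zip(x.copy(), f.copy())))
    for _ in range(iters):
        # each particle follows a leader drawn at random from the Pareto archive
        leaders = np.stack([archive[rng.integers(len(archive))][0]
                            for _ in range(n_particles)])
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest_x - x) + 1.5 * r2 * (leaders - x)
        x = x + v
        f = evaluate(x)
        for i in range(n_particles):
            if dominates(f[i], pbest_f[i]):  # personal best updated by dominance
                pbest_x[i], pbest_f[i] = x[i].copy(), f[i].copy()
        archive = pareto_prune(archive + list(zip(x.copy(), f.copy())))
        if len(archive) > archive_cap:  # crude cap; real MOPSO variants use crowding
            keep = rng.choice(len(archive), archive_cap, replace=False)
            archive = [archive[k] for k in keep]
    return archive
```

Run on the classic Schaffer problem (f1 = x², f2 = (x − 2)²), the returned archive approximates the Pareto front without ever fixing weights between the two objectives — the "on-the-fly compromise" the abstract refers to.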
Award ID(s):
2225507
PAR ID:
10477152
Author(s) / Creator(s):
Publisher / Repository:
1st Workshop on Synergy of Scientific and Machine Learning Modeling (SynS & ML) at ICML, Honolulu, Hawaii, USA, July 2023.
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we propose a new deep learning method for nonlinear Poisson-Boltzmann problems with applications in computational biology. To tackle the discontinuity of the solution, e.g., across protein surfaces, we approximate the solution by a piecewise mesh-free neural network that can capture the dramatic change in the solution across the interface. The partial differential equation problem is first reformulated as a least-squares physics-informed neural network (PINN)-type problem and then discretized into an objective function using the mean squared error over sampled collocation points. The solution is obtained by minimizing this objective function with standard training algorithms such as stochastic gradient descent. Finally, the effectiveness and efficiency of the neural network are validated on manufactured solutions with different frequencies over complex protein interfaces.
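The reformulation described above — a PDE turned into a sampled least-squares objective — can be illustrated in one dimension. This is a generic sketch under stated assumptions (a hypothetical `pinn_objective` helper, finite differences standing in for automatic differentiation), not the paper's piecewise network:

```python
import numpy as np

def pinn_objective(u, source, interior, boundary, bc_vals, h=1e-3, w_bc=10.0):
    """Least-squares PINN-type objective for the 1-D Poisson problem
    -u''(x) = source(x): mean squared PDE residual at sampled interior
    collocation points plus a penalized boundary mismatch. The second
    derivative is taken by central finite differences here, purely to
    keep the sketch dependency-free."""
    upp = (u(interior + h) - 2.0 * u(interior) + u(interior - h)) / h**2
    residual = np.mean((-upp - source(interior)) ** 2)
    bc = np.mean((u(boundary) - bc_vals) ** 2)
    return residual + w_bc * bc
```

A candidate function that satisfies the PDE and boundary conditions drives this objective toward zero, which is exactly the property the training loop exploits.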
  2. We introduce the Discrete-Temporal Sobolev Network (DTSN), a neural network loss function that assists dynamical-system forecasting by minimizing variational differences between the network output and the training data via a temporal Sobolev norm. This approach is entirely data-driven, architecture-agnostic, and does not require derivative information from the estimated system. The DTSN is particularly well suited to chaotic dynamical systems, as it minimizes noise in the network output, which is crucial for such sensitive systems. For our test cases, we consider discrete approximations of the Lorenz-63 system and the Chua circuit. For the network architectures, we use the Long Short-Term Memory (LSTM) network and the Transformer. The performance of the DTSN is compared with the standard MSE loss for both architectures, as well as with the physics-informed neural network (PINN) loss for the LSTM. The DTSN loss is shown to substantially improve accuracy for both architectures, while requiring less information than the PINN loss and without noticeably increasing computational time, thereby demonstrating its potential to improve neural network forecasting of dynamical systems.
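One plausible reading of a first-order discrete temporal Sobolev loss is ordinary MSE on the trajectory plus weighted MSE on finite-difference time derivatives. The function name, the finite-difference discretization, and the weighting scheme below are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def temporal_sobolev_loss(pred, target, dt=1.0, order=1, weight=1.0):
    p = np.asarray(pred, dtype=float)
    t = np.asarray(target, dtype=float)
    # standard MSE on the trajectory itself
    loss = np.mean((p - t) ** 2)
    # plus MSE on finite-difference time derivatives up to `order`
    for _ in range(order):
        p = np.diff(p, axis=0) / dt
        t = np.diff(t, axis=0) / dt
        loss += weight * np.mean((p - t) ** 2)
    return loss
```

Note the derivative terms vanish for a constant offset but penalize slope mismatches, which is how such a loss suppresses high-frequency noise in the forecast.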
  3. In this paper, we present a novel approach to fluid dynamic simulations by leveraging the capabilities of physics-informed neural networks (PINNs) guided by the newly unveiled Principle of Minimum Pressure Gradient (PMPG). In a PINN formulation, the physics problem is converted into a minimization problem (typically least squares). The PMPG asserts that, for incompressible flows, the total magnitude of the pressure gradient over the domain must be minimal at every time instant, turning fluid mechanics into a minimization problem and making it a natural fit for a PINN formulation. Following the PMPG, the proposed formulation seeks a neural network for the flow field that minimizes Nature's cost function for incompressible flows, in contrast to traditional PINNs, which minimize the residuals of the Navier–Stokes equations. This technique eliminates the need to train a separate pressure model, thereby reducing training time and computational cost. We demonstrate the effectiveness of this approach through a case study of inviscid flow around a cylinder. The proposed approach outperforms the traditional PINN approach in terms of training time, convergence rate, and compliance with physical metrics. While demonstrated on a simple geometry, the methodology is extensible to more complex flow fields (e.g., three-dimensional, unsteady, and viscous flows) within the incompressible realm, which is the region of applicability of the PMPG.
  4. We consider the problem of estimating differences in two multi-attribute Gaussian graphical models (GGMs) which are known to have similar structure, using a penalized D-trace loss function with nonconvex penalties. The GGM structure is encoded in its precision (inverse covariance) matrix. Existing methods for multi-attribute differential graph estimation are based on a group lasso penalized loss function. In this paper, we consider a penalized D-trace loss function with nonconvex (log-sum and smoothly clipped absolute deviation (SCAD)) penalties. Two proximal gradient descent methods are presented to optimize the objective function. Theoretical analysis establishing local consistency in support recovery, local convexity and estimation in high-dimensional settings is provided. We illustrate our approach with a numerical example. 
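For reference, the smoothly clipped absolute deviation (SCAD) penalty mentioned above has a standard closed form due to Fan and Li: linear like the lasso near zero, a quadratic transition, then constant, so large entries are not over-shrunk. A minimal elementwise sketch (the function name is an illustrative choice):

```python
import numpy as np

def scad_penalty(theta, lam, a=3.7):
    """Elementwise SCAD penalty with regularization level `lam` and
    shape parameter `a` (Fan & Li's conventional default is 3.7)."""
    t = np.abs(np.asarray(theta, dtype=float))
    return np.where(
        t <= lam,
        lam * t,                                          # lasso-like near zero
        np.where(
            t <= a * lam,
            (2.0 * a * lam * t - t**2 - lam**2) / (2.0 * (a - 1.0)),  # quadratic blend
            lam**2 * (a + 1.0) / 2.0,                     # constant for large entries
        ),
    )
```

The constant tail is what makes the penalty nonconvex — and what removes the bias that a group-lasso penalty imposes on large precision-matrix entries.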
  5. Predicting failure in solids has broad applications, including earthquake prediction, which remains an unattainable goal. However, recent machine learning work shows that laboratory earthquakes can be predicted using micro-failure events and the temporal evolution of fault-zone elastic properties. Remarkably, these results come from purely data-driven models trained with large datasets. Such data are equivalent to centuries of fault motion, rendering application to tectonic faulting unclear. In addition, the underlying physics of such predictions is poorly understood. Here, we address scalability using a novel physics-informed neural network (PINN). Our model encodes fault physics in the deep learning loss function using time-lapse ultrasonic data. PINN models outperform data-driven models and significantly improve transfer learning for small training datasets and conditions outside those used in training. Our work suggests that PINNs offer a promising path for machine learning-based failure prediction and, ultimately, for improving our understanding of earthquake physics and prediction.