This work presents a two-stage adaptive framework for progressively developing deep neural network (DNN) architectures that generalize well for a given training dataset. In the first stage, a layerwise training approach is adopted in which a new layer is added each time and trained independently by freezing the parameters in the previous layers. We impose desirable structure on the DNN by employing manifold regularization, sparsity regularization, and physics-informed terms. We introduce an $\epsilon$-$\delta$ stability-promoting concept as a desirable property of a learning algorithm and show that employing manifold regularization yields an $\epsilon$-$\delta$ stability-promoting algorithm. Further, we derive the necessary conditions for the trainability of a newly added layer and investigate the training saturation problem. In the second stage of the algorithm (post-processing), a sequence of shallow networks is employed to extract information from the residual produced in the first stage, thereby improving prediction accuracy. Numerical investigations on prototype regression and classification problems demonstrate that the proposed approach can outperform fully connected DNNs of the same size. Moreover, by equipping the physics-informed neural network (PINN) with the proposed adaptive architecture strategy to solve partial differential equations, we show numerically that adaptive PINNs are not only superior to standard PINNs but also produce interpretable hidden layers with provable stability. We also apply our architecture design strategy to solve inverse problems governed by elliptic partial differential equations.
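A minimal NumPy sketch of the two-stage idea described above. This is illustrative only: to keep it short, each new hidden layer's weights are drawn randomly and frozen rather than trained, only the linear readout is refit on the frozen stack, and the manifold/sparsity/physics regularizers are omitted; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(3x) on [-1, 1].
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * x)

def fit_output_layer(features, target):
    """Least-squares fit of a linear readout (with bias) on frozen features."""
    A = np.hstack([features, np.ones((features.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, target, rcond=None)
    return w

def readout(features, w):
    A = np.hstack([features, np.ones((features.shape[0], 1))])
    return A @ w

# Stage 1 (simplified): grow the network layer by layer; earlier layers
# stay frozen while the readout is refit on the deepened feature stack.
h = x
for _ in range(3):
    W = rng.standard_normal((h.shape[1], 32))   # frozen once created
    h = np.tanh(h @ W)
w = fit_output_layer(h, y)
pred = readout(h, w)

# Stage 2 (post-processing): a shallow network fits the residual left by
# stage 1, and its prediction is added as a correction.
residual = y - pred
g = np.tanh(x @ rng.standard_normal((1, 32)))
w_res = fit_output_layer(g, residual)
pred2 = pred + readout(g, w_res)

err1 = float(np.mean((y - pred) ** 2))
err2 = float(np.mean((y - pred2) ** 2))
print(err1, err2)  # stage 2 cannot increase the least-squares error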
Learning Tensor Representations to Improve Quality of Wavefield Data
Recent advancements in physics-informed machine learning have contributed to solving partial differential equations by means of a neural network. Several physics-informed neural network works have since followed to solve inverse problems arising in structural health monitoring. Other works involving physics-informed neural networks solve the wave equation with partial data and model wavefield data generators for efficient sound data generation. While much work has been done to show that partial differential equations can be solved and identified using a neural network, little has been done to do the same with more basic machine learning (ML) models. The advantage of basic ML models is that the parameters learned by a simpler model are both more interpretable and extensible. For applications such as ultrasonic nondestructive evaluation, this interpretability is essential for the trustworthiness of the methods and for characterization of the material system under test. In this work, we show an interpretable, physics-informed representation learning framework that can analyze data across multiple dimensions (e.g., two dimensions of space and one dimension of time). The algorithm comes with convergence guarantees. In addition, our algorithm provides interpretability of the learned model, as the parameters correspond to individual solutions extracted from the data. We demonstrate how this algorithm functions on wavefield videos.
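The core idea of extracting individual solutions from multi-dimensional data can be illustrated with a plain low-rank factorization. This sketch uses an SVD on synthetic separable space-time data; the paper's actual framework adds physics-informed structure and convergence guarantees that a bare SVD does not have, and all data here are fabricated for illustration.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 1.0, 80)

# Synthetic data: two separable space-time components, standing in for
# two wave "solutions" mixed together in a measured wavefield.
D = (np.outer(np.sin(2 * np.pi * 3 * x), np.cos(2 * np.pi * 5 * t))
     + 0.5 * np.outer(np.sin(2 * np.pi * 7 * x), np.cos(2 * np.pi * 2 * t)))

# A rank-R factorization D ≈ U diag(s) V^T separates the components;
# each column of U (resp. V) is an interpretable spatial (temporal) profile.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
print(s[:4])  # two dominant singular values; the rest are near zero
```

The sharp drop after the second singular value is what makes the learned representation interpretable: each retained factor pair corresponds to one underlying component.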
- Award ID(s):
- 1747783
- PAR ID:
- 10488292
- Publisher / Repository:
- American Society of Mechanical Engineers
- Date Published:
- Journal Name:
- Proc. of the Review of Quantitative Nondestructive Evaluation
- Volume:
- 87202
- ISBN:
- 978-0-7918-8720-2
- Page Range / eLocation ID:
- V001T05A002
- Format(s):
- Medium: X
- Location:
- Austin, Texas, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
The inception of physics-constrained or physics-informed machine learning represents a paradigm shift, addressing the challenges associated with data scarcity and enhancing model interpretability. This innovative approach incorporates the fundamental laws of physics as constraints, guiding the training process of machine learning models. In this work, the physics-constrained convolutional recurrent neural network is further extended for solving spatial-temporal partial differential equations with arbitrary boundary conditions. Two notable advancements are introduced: the implementation of boundary conditions as soft constraints through finite difference-based differentiation, and the establishment of an adaptive weighting mechanism for the optimal allocation of weights to various losses. These enhancements significantly augment the network's ability to manage intricate boundary conditions and expedite the training process. The efficacy of the proposed model is validated through its application to two-dimensional phase transition, fluid dynamics, and reaction-diffusion problems, which are pivotal in materials modeling. Compared to traditional physics-constrained neural networks, the physics-constrained convolutional recurrent neural network demonstrates a tenfold increase in prediction accuracy within a similar computational budget. Moreover, the model's exceptional performance in extrapolating solutions for the Burgers' equation underscores its utility. Therefore, this research establishes the physics-constrained recurrent neural network as a viable surrogate model for sophisticated spatial-temporal PDE systems, particularly beneficial in scenarios plagued by sparse and noisy datasets.
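The two advancements named above can be sketched in a few lines of NumPy: a finite-difference PDE residual in the interior, a soft boundary-condition penalty on the edges, and an adaptive weighting of the two losses. The weighting rule shown (inverse current magnitude) is one common choice and is an assumption, not necessarily the paper's exact mechanism; the candidate field `u` stands in for a network output.

```python
import numpy as np

# Candidate solution on a grid (e.g., a network's output snapshot) for a
# 2-D Laplace problem; here just an illustrative array with small noise.
n = 32
xx, yy = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
u = xx * yy + 0.01 * np.random.default_rng(1).standard_normal((n, n))

# Interior PDE residual via the 5-point finite-difference Laplacian
# (the "physics" term, imposed as a soft constraint on the interior).
lap = (u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
       - 4 * u[1:-1, 1:-1])
loss_pde = float(np.mean(lap ** 2))

# Boundary condition u = g imposed as a soft constraint on the edges.
g = xx * yy
boundary = np.concatenate([u[0] - g[0], u[-1] - g[-1],
                           u[:, 0] - g[:, 0], u[:, -1] - g[:, -1]])
loss_bc = float(np.mean(boundary ** 2))

# Adaptive weighting: scale each loss by the inverse of its current
# magnitude so that neither term dominates the total objective.
w_pde = 1.0 / (loss_pde + 1e-12)
w_bc = 1.0 / (loss_bc + 1e-12)
total = w_pde * loss_pde + w_bc * loss_bc
print(loss_pde, loss_bc, total)
```

After weighting, each term contributes comparably to the objective, which is the balancing behavior the adaptive mechanism is meant to provide during training.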
-
Harnessing data to discover the underlying governing laws or equations that describe the behavior of complex physical systems can significantly advance our modeling, simulation and understanding of such systems in various science and engineering disciplines. This work introduces a novel approach called physics-informed neural network with sparse regression to discover governing partial differential equations from scarce and noisy data for nonlinear spatiotemporal systems. In particular, this discovery approach seamlessly integrates the strengths of deep neural networks for rich representation learning, physics embedding, automatic differentiation and sparse regression to approximate the solution of system variables, compute essential derivatives, as well as identify the key derivative terms and parameters that form the structure and explicit expression of the equations. The efficacy and robustness of this method are demonstrated, both numerically and experimentally, on discovering a variety of partial differential equation systems with different levels of data scarcity and noise accounting for different initial/boundary conditions. The resulting computational framework shows the potential for closed-form model discovery in practical applications where large and accurate datasets are intractable to capture.
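The sparse-regression half of that pipeline can be sketched directly: build a library of candidate terms, regress the time derivative onto it, and repeatedly threshold small coefficients (sequential thresholded least squares). This toy uses noise-free analytic derivatives of a two-mode heat-equation solution, whereas the paper obtains derivatives from a neural network via automatic differentiation; the library and threshold here are illustrative choices.

```python
import numpy as np

# Analytic data satisfying the heat equation u_t = u_xx, with two modes:
# u(x,t) = e^{-t} sin x + e^{-4t} sin 2x  (illustrative, noise-free).
x = np.linspace(0.0, np.pi, 50)
t = np.linspace(0.0, 1.0, 40)
X, T = np.meshgrid(x, t)
u = np.exp(-T) * np.sin(X) + np.exp(-4 * T) * np.sin(2 * X)
u_t = -np.exp(-T) * np.sin(X) - 4 * np.exp(-4 * T) * np.sin(2 * X)
u_x = np.exp(-T) * np.cos(X) + 2 * np.exp(-4 * T) * np.cos(2 * X)
u_xx = -np.exp(-T) * np.sin(X) - 4 * np.exp(-4 * T) * np.sin(2 * X)

# Candidate library of terms that might appear in the governing PDE.
theta = np.column_stack([f.ravel() for f in (u, u_x, u_xx, u * u_x)])
names = ["u", "u_x", "u_xx", "u*u_x"]

# Sequential thresholded least squares: regress u_t on the library,
# zero out small coefficients, and refit on the surviving terms.
xi, *_ = np.linalg.lstsq(theta, u_t.ravel(), rcond=None)
for _ in range(5):
    small = np.abs(xi) < 0.05
    xi[small] = 0.0
    big = ~small
    if big.any():
        xi[big], *_ = np.linalg.lstsq(theta[:, big], u_t.ravel(), rcond=None)
print(dict(zip(names, xi)))  # expect u_xx ≈ 1 and all other terms ≈ 0
```

The surviving coefficient pattern is the "explicit expression" of the equation: here it recovers u_t = u_xx with the remaining library terms pruned away.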
-
Modern machine learning has been on the rise in many scientific domains, such as acoustics. Many scientific problems face challenges with limited data, which prevent the use of many powerful machine learning strategies. In response, the physics of wave propagation can be exploited to reduce the amount of data necessary and improve the performance of machine learning techniques. Based on this need, we present a physics-informed machine learning framework, known as wave-informed regression, to extract dispersion curves from guided wave wavefield data on non-homogeneous media. Wave-informed regression blends matrix factorization with known wave physics by borrowing results from optimization theory. We briefly derive the algorithm and discuss a signal processing-based interpretability aspect of it, which aids in extracting dispersion curves for non-homogeneous media. We show our results on non-homogeneous media, where the dispersion curves change as a function of space. We demonstrate our ability to use wave-informed regression to extract spatially local dispersion curves.
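The signal-processing intuition behind factorization-based dispersion extraction can be shown on a toy wavefield: factor the space-time data matrix, then read a wavenumber off the FFT peak of the dominant spatial factor, giving one point on a dispersion curve. Wave-informed regression adds a wave-physics regularizer to the factorization; this sketch uses a plain SVD and synthetic single-mode data, so everything here is an illustrative assumption.

```python
import numpy as np

# Synthetic guided-wave wavefield: one mode with wavenumber k_true,
# d(x, t) = sin(k x) cos(w t)  (separable standing-wave toy model).
nx, nt = 256, 64
x = np.arange(nx)                       # unit spatial sampling
k_true = 2 * np.pi * 10 / nx            # 10 cycles across the aperture
D = np.outer(np.sin(k_true * x),
             np.cos(2 * np.pi * 0.1 * np.arange(nt)))

# Factorize the wavefield; the dominant left factor is the mode shape.
U, s, Vt = np.linalg.svd(D, full_matrices=False)
mode = U[:, 0]

# Read the wavenumber off the FFT peak of the spatial factor -- one
# (frequency, wavenumber) point on a dispersion curve.
spectrum = np.abs(np.fft.rfft(mode))
k_est = 2 * np.pi * (np.argmax(spectrum[1:]) + 1) / nx
print(k_true, k_est)
```

Repeating this over windows in space is what yields the spatially local dispersion curves mentioned in the abstract.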
-
Accurate representations of unknown and sub-grid physical processes through parameterizations (or closures) in numerical simulations with quantified uncertainty are critical for resolving the coarse-grained partial differential equations that govern many problems, ranging from weather and climate prediction to turbulence simulations. Recent advances have seen machine learning (ML) increasingly applied to model these subgrid processes, resulting in the development of hybrid physics-ML models through integration with numerical solvers. In this work, we introduce a novel framework for the joint estimation and uncertainty quantification of physical parameters and machine learning parameterizations in tandem, leveraging differentiable programming. Differentiable programming enables online training and efficient Bayesian inference within the resulting high-dimensional parameter space. This proof of concept underscores the substantial potential of differentiable programming in synergistically combining machine learning with differential equations, thereby enhancing the capabilities of hybrid physics-ML modeling.
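The key enabling mechanism, differentiating a data misfit through a numerical solver to estimate a physical parameter, can be shown on a scalar toy problem. Here the chain rule through an unrolled Euler solve is written by hand (differentiable-programming frameworks automate this), the misfit is taken on the log of the endpoint for conditioning, and there is no ML closure or Bayesian inference, so this is a deliberately stripped-down sketch of the idea, not the paper's framework.

```python
import numpy as np

# Toy model: dy/dt = -a*y, discretized by explicit Euler with step dt.
# After n steps, y_n = y0 * (1 - a*dt)**n, a differentiable function of
# the physical parameter `a`, so a misfit gradient exists via chain rule.
dt, n = 0.01, 100
a_true = 2.0
log_y_obs = n * np.log(1.0 - a_true * dt)   # log endpoint of "truth"

def misfit_and_grad(a):
    # Misfit on the log endpoint: r = n*log(1 - a*dt) - log_y_obs.
    r = n * np.log(1.0 - a * dt) - log_y_obs
    dr_da = n * (-dt) / (1.0 - a * dt)      # chain rule through the solver
    return 0.5 * r * r, r * dr_da

# Gradient descent through the unrolled solver recovers the parameter.
a = 0.5
for _ in range(200):
    _, grad = misfit_and_grad(a)
    a -= 0.5 * grad
print(a)  # converges toward a_true = 2.0
```

In the hybrid physics-ML setting, `a` is replaced by the joint vector of physical parameters and network weights, and the same differentiability of the solver supplies gradients for both at once.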

