Abstract: We study Bayesian data assimilation (filtering) for time-evolution partial differential equations (PDEs), for which the underlying forward problem may be very unstable or ill-posed. Such PDEs, which include the Navier–Stokes equations of fluid dynamics, are characterized by a high sensitivity of solutions to perturbations of the initial data, a lack of rigorous global well-posedness results, and possible non-convergence of numerical approximations. Under very mild and readily verifiable general hypotheses on the forward solution operator of such PDEs, we prove that the posterior measure expressing the solution of the Bayesian filtering problem is stable with respect to perturbations of the noisy measurements, and we provide quantitative estimates on the convergence of approximate Bayesian filtering distributions computed from numerical approximations. For the Navier–Stokes equations, our results imply uniform stability of the filtering problem even at arbitrarily small viscosity, when the underlying forward problem may become ill-posed, as well as the compactness of numerical approximants in a suitable metric on time-parametrized probability measures.
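The filtering problem described in this abstract can be illustrated with a minimal bootstrap particle filter: an ensemble of states is propagated through a forward solution operator and reweighted by Bayes' rule against noisy measurements. The names `forward`, `obs_map`, the Gaussian noise model, and the resampling scheme below are illustrative assumptions, not the construction used in the paper.

```python
import numpy as np

def bootstrap_particle_filter(particles, observations, forward, obs_map, obs_std, dt, rng):
    """Minimal bootstrap particle filter, sketched as an illustration of
    Bayesian data assimilation; forward operator, observation map, and
    Gaussian likelihood are assumptions, not the paper's setting."""
    posteriors = []
    for y in observations:
        # Propagate each particle through the (possibly unstable) forward operator.
        particles = np.array([forward(u, dt) for u in particles])
        # Bayes update: weight particles by the Gaussian observation likelihood.
        residuals = np.array([np.atleast_1d(obs_map(u) - y) for u in particles])
        log_w = -0.5 * np.sum(residuals ** 2, axis=1) / obs_std ** 2
        weights = np.exp(log_w - log_w.max())
        weights /= weights.sum()
        # Resample to keep an equally weighted ensemble and avoid weight degeneracy.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        posteriors.append(particles.copy())
    return posteriors
```

Here `particles` would be an ensemble of discretized initial states and `forward(u, dt)` a single step of a numerical solver; both are hypothetical placeholders.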
Fully nonlinear stochastic and rough PDEs: Classical and viscosity solutions
Abstract: We study fully nonlinear second-order (forward) stochastic PDEs. They can also be viewed as forward path-dependent PDEs and will be treated as rough PDEs under a unified framework. For the most general fully nonlinear case, we develop a local theory of classical solutions and then define viscosity solutions through smooth test functions. Our notion of viscosity solutions is equivalent to the alternative using semi-jets. Next, we prove basic properties such as consistency, stability, and a partial comparison principle in the general setting. If the diffusion coefficient is semilinear (i.e., linear in the gradient of the solution and nonlinear in the solution; the drift can still be fully nonlinear), we establish a complete theory, including global existence and a comparison principle.
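For readers unfamiliar with test-function definitions, the following is the classical deterministic analogue (in the sense of Crandall–Lions) of a viscosity subsolution for a fully nonlinear equation; the paper's stochastic/rough, path-dependent setting adapts this idea to a suitable class of test functions, and the statement below is only an orienting sketch.

```latex
% Classical (deterministic) test-function definition, stated for a
% degenerate elliptic F; illustrative analogue only.
An upper semicontinuous function $u$ is a viscosity subsolution of
$F(x, u, Du, D^{2}u) = 0$ if, for every smooth test function $\varphi$
such that $u - \varphi$ attains a local maximum at $x_{0}$,
\[
  F\big(x_{0},\, u(x_{0}),\, D\varphi(x_{0}),\, D^{2}\varphi(x_{0})\big) \le 0 .
\]
Supersolutions are defined via local minima and the reverse inequality,
and a viscosity solution is both a sub- and a supersolution.
```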
- Award ID(s): 1908665
- NSF-PAR ID: 10251943
- Date Published:
- Journal Name: Probability, Uncertainty and Quantitative Risk
- Volume: 5
- Issue: 1
- ISSN: 2367-0126
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Typical model reduction methods for parametric partial differential equations construct a linear space V_n which approximates well the solution manifold M consisting of all solutions u(y), with y the vector of parameters. In many problems of numerical computation, nonlinear methods such as adaptive approximation, n-term approximation, and certain tree-based methods may provide improved numerical efficiency over linear methods. Nonlinear model reduction methods replace the linear space V_n by a nonlinear space Σ_n. Little is known in terms of their performance guarantees, and most existing numerical experiments use a parameter dimension of at most two. In this work, we make a step towards a more cohesive theory for nonlinear model reduction. Framing these methods in the general setting of library approximation, we give a first comparison of their performance with the performance of standard linear approximation for any compact set. We then study these methods for solution manifolds of parametrized elliptic PDEs. We study a specific example of library approximation where the parameter domain is split into a finite number N of rectangular cells, with affine spaces of dimension m assigned to each cell, and give performance guarantees with respect to accuracy of approximation versus m and N. (A minimal illustrative sketch of this cell-splitting construction appears after this list.)
- Despite the success of physics-informed neural networks (PINNs) in approximating partial differential equations (PDEs), PINNs can sometimes fail to converge to the correct solution in problems involving complicated PDEs. This is reflected in several recent studies on characterizing the "failure modes" of PINNs, although a thorough understanding of the connection between PINN failure modes and sampling strategies is missing. In this paper, we provide a novel perspective on the failure modes of PINNs by hypothesizing that training PINNs relies on successful "propagation" of the solution from initial and/or boundary condition points to interior points. We show that PINNs with poor sampling strategies can get stuck at trivial solutions if there are propagation failures, characterized by highly imbalanced PDE residual fields. To mitigate propagation failures, we propose a novel Retain-Resample-Release sampling (R3) algorithm that can incrementally accumulate collocation points in regions of high PDE residuals with little to no computational overhead. We provide an extension of R3 sampling to respect the principle of causality while solving time-dependent PDEs. We theoretically analyze the behavior of R3 sampling and empirically demonstrate its efficacy and efficiency in comparison with baselines on a variety of PDE problems. (An illustrative sketch of one R3-style sampling step appears after this list.)
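The cell-splitting construction mentioned in the model-reduction item above can be sketched as follows: partition a rectangular parameter domain into cells and assign each cell an m-dimensional affine space, here the local snapshot mean plus leading PCA modes. The uniform cell layout, the PCA choice, and all function names are illustrative assumptions, not necessarily the construction analyzed in that work.

```python
import numpy as np

def build_affine_library(params, snapshots, n_cells_per_dim, m):
    """Sketch of piecewise-affine model reduction: split the parameter box
    into rectangular cells and fit an m-dimensional affine space
    (mean + leading PCA modes of the local snapshots) in each cell.
    Illustrative only; not the cited paper's algorithm."""
    lo, hi = params.min(axis=0), params.max(axis=0)
    # Map each parameter vector to the index of its rectangular cell.
    cell_idx = np.floor((params - lo) / (hi - lo + 1e-12) * n_cells_per_dim).astype(int)
    cell_idx = np.minimum(cell_idx, n_cells_per_dim - 1)
    keys = [tuple(row) for row in cell_idx]
    library = {}
    for key in set(keys):
        mask = np.array([k == key for k in keys])
        local = snapshots[mask]
        mean = local.mean(axis=0)
        # Leading m left singular vectors span the linear part of the local affine space.
        U, _, _ = np.linalg.svd((local - mean).T, full_matrices=False)
        library[key] = (mean, U[:, :m])
    return library
```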
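For the PINN item above, one plausible reading of a single Retain-Resample-Release step: retain collocation points whose PDE residual exceeds the mean residual, release the rest, and refill the pool with fresh uniform samples so the population size stays fixed. The thresholding rule and domain handling below are hedged assumptions and may differ from the exact algorithm of the cited work.

```python
import numpy as np

def r3_resample(points, residuals, domain_lo, domain_hi, rng):
    """One Retain-Resample-Release-style step (hedged sketch): keep
    high-residual collocation points, drop the rest, and draw fresh
    uniform points to keep the total count fixed."""
    keep = residuals >= residuals.mean()           # retain points with above-average residual
    retained = points[keep]
    n_new = len(points) - len(retained)            # released points are replaced
    fresh = rng.uniform(domain_lo, domain_hi, size=(n_new, points.shape[1]))
    return np.concatenate([retained, fresh], axis=0)
```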