

Title: On Bayesian data assimilation for PDEs with ill-posed forward problems
Abstract: We study Bayesian data assimilation (filtering) for time-evolution partial differential equations (PDEs) for which the underlying forward problem may be very unstable or ill-posed. Such PDEs, which include the Navier–Stokes equations of fluid dynamics, are characterized by a high sensitivity of solutions to perturbations of the initial data, a lack of rigorous global well-posedness results, and possible non-convergence of numerical approximations. Under very mild and readily verifiable hypotheses on the forward solution operator of such PDEs, we prove that the posterior measure expressing the solution of the Bayesian filtering problem is stable with respect to perturbations of the noisy measurements, and we provide quantitative estimates on the convergence of approximate Bayesian filtering distributions computed from numerical approximations. For the Navier–Stokes equations, our results imply uniform stability of the filtering problem even at arbitrarily small viscosity, when the underlying forward problem may become ill-posed, as well as compactness of numerical approximants in a suitable metric on time-parametrized probability measures.
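The filtering setup described in the abstract can be illustrated with a minimal bootstrap particle filter, using an unstable scalar map as a toy stand-in for the PDE forward solution operator. Everything here (the forward map, noise levels, particle count) is an illustrative assumption, not the paper's construction.

```python
import math
import random

random.seed(0)

def forward(x, dt=0.1, r=3.0):
    # Toy unstable forward map (one logistic step), standing in for the
    # possibly ill-posed PDE solution operator; purely illustrative.
    return x + dt * r * x * (1.0 - x)

def bootstrap_filter(observations, n_particles=500, obs_noise=0.1):
    """One predict / weight / resample cycle per noisy observation."""
    particles = [random.gauss(0.5, 0.2) for _ in range(n_particles)]
    means = []
    for y in observations:
        # Predict: push each particle through the forward map.
        particles = [forward(p) + random.gauss(0.0, 0.01) for p in particles]
        # Weight: Gaussian likelihood of the noisy measurement y.
        w = [math.exp(-0.5 * ((y - p) / obs_noise) ** 2) for p in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        # Resample according to the normalized weights.
        particles = random.choices(particles, weights=w, k=n_particles)
        means.append(sum(particles) / n_particles)
    return means

# Noisy observations of a reference trajectory started at x0 = 0.4.
truth = [0.4]
for _ in range(5):
    truth.append(forward(truth[-1]))
obs = [x + random.gauss(0.0, 0.1) for x in truth[1:]]
est = bootstrap_filter(obs)
```

The paper's stability result concerns exactly this posterior (here summarized by the particle means): small perturbations of the noisy observations should perturb the filtering distribution in a controlled way, even when `forward` itself amplifies perturbations of the initial state.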
Award ID(s):
2042454
NSF-PAR ID:
10341609
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Inverse Problems
Volume:
38
Issue:
8
ISSN:
0266-5611
Page Range / eLocation ID:
085012
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. For the inverse problem in physical models, one measures the solution and infers the model parameters using information from the collected data. Oftentimes, these data are inadequate and render the inverse problem ill-posed. We study this ill-posedness in the context of optical imaging, a medical imaging technique that uses light to probe (bio-)tissue structure. Depending on the intensity of the light, the forward problem can be described by different types of equations. High-energy light scatters very little, and one uses the radiative transfer equation (RTE) as the model; low-energy light scatters frequently, so the diffusion equation (DE) suffices as a good approximation. A multiscale approximation links the hyperbolic-type RTE with the parabolic-type DE. The inverse problems for the two equations have a multiscale passage as well, so one expects that as the energy of the photons diminishes, the inverse problem changes from well- to ill-posed. We study this stability deterioration using Bayesian inference. In particular, we use the Kullback–Leibler divergence between the prior distribution and the posterior distribution based on the RTE to prove that the information gain from the measurement vanishes as the energy of the photons decreases, so that the inverse problem is ill-posed in the diffusive regime. In the linearized setting, we also show that the mean square error of the posterior distribution increases as we approach the diffusive regime.
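The vanishing information gain can be mimicked in a one-dimensional linear-Gaussian caricature: as the sensitivity b of the data to the parameter shrinks (a crude proxy for entering the diffusive regime), KL(posterior || prior) decays toward zero. The model and all numbers below are illustrative assumptions, not the RTE/DE analysis itself.

```python
import math

def gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for 1-D Gaussians."""
    return 0.5 * (math.log(var_q / var_p)
                  + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)

def posterior(b, y, var_prior=1.0, var_noise=0.04):
    """Conjugate Gaussian update for the linear model y = b*theta + noise."""
    var_post = 1.0 / (1.0 / var_prior + b * b / var_noise)
    mu_post = var_post * b * y / var_noise
    return mu_post, var_post

# Information gain KL(posterior || prior) for decreasing sensitivity b:
# as b -> 0 the posterior collapses back onto the prior and the gain
# from the measurement vanishes.
y = 0.5
gains = []
for b in (1.0, 0.1, 0.01):
    mu, var = posterior(b, y)
    gains.append(gaussian_kl(mu, var, 0.0, 1.0))
```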
  2. In recent years, large convolutional neural networks have been widely used as tools for image deblurring, because of their ability to restore images very precisely. It is well known that image deblurring is mathematically modeled as an ill-posed inverse problem and its solution is difficult to approximate when noise affects the data. Indeed, one limitation of neural networks for deblurring is their sensitivity to noise and other perturbations, which can lead to instability and produce poor reconstructions. In addition, networks do not necessarily take into account the numerical formulation of the underlying imaging problem when trained end-to-end. In this paper, we propose some strategies to improve stability without losing too much accuracy when deblurring images with deep-learning-based methods. First, we suggest a very small neural architecture, which reduces the execution time for training, satisfying a green AI need, and does not extremely amplify noise in the computed image. Second, we introduce a unified framework where a pre-processing step balances the lack of stability of the following neural-network-based step. Two different pre-processors are presented. The former implements a strong parameter-free denoiser, and the latter is a variational-model-based regularized formulation of the latent imaging problem. This framework is also formally characterized by mathematical analysis. Numerical experiments are performed to verify the accuracy and stability of the proposed approaches for image deblurring when unknown or not-quantified noise is present; the results confirm that they improve the network stability with respect to noise. In particular, the model-based framework represents the most reliable trade-off between visual precision and robustness.
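A minimal sketch of a variational pre-processing step of the kind described: Tikhonov-regularized deblurring by gradient descent on a 1-D signal. The blur kernel, step sizes, and test signal are hypothetical stand-ins, and the subsequent neural-network step is omitted entirely.

```python
def blur(x, kernel):
    """1-D convolution with zero padding: a toy forward blur operator A."""
    h = len(kernel) // 2
    n = len(x)
    return [sum(kernel[j] * x[i + j - h] for j in range(len(kernel))
                if 0 <= i + j - h < n) for i in range(n)]

def tikhonov_deblur(b, kernel, lam=0.01, steps=500, lr=0.5):
    """Gradient descent on ||A x - b||^2 + lam * ||x||^2.

    This is the kind of regularized variational formulation a pre-processor
    can solve before a learned refinement step; a minimal sketch only.
    """
    x = [0.0] * len(b)
    for _ in range(steps):
        r = [ai - bi for ai, bi in zip(blur(x, kernel), b)]
        g = blur(r, kernel[::-1])  # A^T r: adjoint of zero-padded convolution
        x = [xi - lr * (2.0 * gi + 2.0 * lam * xi) for xi, gi in zip(x, g)]
    return x

# Blur a step signal, then recover a stabilized estimate.
kernel = [0.25, 0.5, 0.25]
x_true = [0.0] * 4 + [1.0] * 4 + [0.0] * 4
b = blur(x_true, kernel)
x = tikhonov_deblur(b, kernel)
```

The regularization term lam * ||x||^2 is what buys stability: it bounds the amplification of noise in b at the price of a small bias in the reconstruction, which mirrors the accuracy/stability trade-off discussed above.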
  3. The goal of this study is to develop a new computed tomography (CT) image reconstruction method, aiming at improving the quality of the reconstructed images of existing methods while reducing computational costs. Existing CT reconstruction is modeled by pixel-based piecewise constant approximations of the integral equation that describes the CT projection data acquisition process. Using these approximations imposes a bottleneck model error and results in a discrete system of large size. We propose to develop a content-adaptive unstructured grid (CAUG) based regularized CT reconstruction method to address these issues. Specifically, we design a CAUG of the image domain to sparsely represent the underlying image, and introduce a CAUG-based piecewise linear approximation of the integral equation by employing a collocation method. We further apply a regularization defined on the CAUG for the resulting ill-posed linear system, which may lead to a sparse linear representation for the underlying solution. The regularized CT reconstruction is formulated as a convex optimization problem, whose objective function consists of a weighted least-squares fidelity term, a regularization term and a constraint term. The corresponding weight matrix is derived from the simultaneous algebraic reconstruction technique (SART). We then develop a SART-type preconditioned fixed-point proximity algorithm to solve the optimization problem. Convergence analysis is provided for the resulting iterative algorithm. Numerical experiments demonstrate the superiority of the proposed method over several existing methods in terms of both suppressing noise and reducing computational costs. These methods include the SART without regularization and with the quadratic regularization, the traditional total variation (TV) regularized reconstruction method and the TV superiorized conjugate gradient method on the pixel grid.
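The basic unregularized SART iteration from which the weight matrix above is derived can be sketched on a tiny consistent system. The matrix below is a hypothetical stand-in for the CT projection operator; the CAUG discretization, regularization, and proximity steps of the paper are omitted.

```python
def sart(A, b, n_iter=200, lam=1.0):
    """Simultaneous Algebraic Reconstruction Technique for A x = b.

    Update: x_j += lam / (sum_i A_ij) *
                   sum_i A_ij * (b_i - (A x)_i) / (sum_k A_ik)
    i.e. residuals are normalized by row sums and corrections by
    column sums; converges for consistent systems when 0 < lam < 2.
    """
    m, n = len(A), len(A[0])
    col_sums = [sum(A[i][j] for i in range(m)) for j in range(n)]
    row_sums = [sum(A[i]) for i in range(m)]
    x = [0.0] * n
    for _ in range(n_iter):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            corr = sum(A[i][j] * (b[i] - Ax[i]) / row_sums[i]
                       for i in range(m))
            x[j] += lam * corr / col_sums[j]
    return x

# Tiny consistent system standing in for the projection geometry.
A = [[1.0, 1.0], [1.0, 2.0], [2.0, 1.0]]
b = [3.0, 5.0, 4.0]   # generated from the exact solution x = (1, 2)
x = sart(A, b)
```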
  4. Discovering governing physical laws from noisy data is a grand challenge in many science and engineering research areas. We present a new approach to data-driven discovery of ordinary differential equations (ODEs) and partial differential equations (PDEs), in explicit or implicit form. We demonstrate our approach on a wide range of problems, including shallow water equations and Navier–Stokes equations. The key idea is to select candidate terms for the underlying equations using dimensional analysis, and to approximate the weights of the terms with error bars using our threshold sparse Bayesian regression. This new algorithm employs Bayesian inference to tune the hyperparameters automatically. Our approach is effective, robust and able to quantify uncertainties by providing an error bar for each discovered candidate equation. The effectiveness of our algorithm is demonstrated through a collection of classical ODEs and PDEs. Numerical experiments demonstrate the robustness of our algorithm with respect to noisy data and its ability to discover various candidate equations with error bars that represent the quantified uncertainties. Detailed comparisons with the sequential threshold least-squares algorithm and the lasso algorithm, carried out on noisy time-series measurements, indicate that the proposed method provides more robust and accurate results. In addition, the data-driven prediction of dynamics with error bars using discovered governing physical laws is more accurate and robust than classical polynomial regressions.
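The sequential threshold least-squares baseline mentioned in the comparison (the baseline, not the paper's threshold sparse Bayesian regression) can be sketched as: fit, zero out small coefficients, refit. The candidate library and target equation below are a hypothetical logistic-type example.

```python
def solve(M, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n + 1):
                A[r][c] -= f * A[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (A[k][n] - sum(A[k][c] * x[c]
                              for c in range(k + 1, n))) / A[k][k]
    return x

def lstsq(Theta, y, active):
    """Least squares restricted to the active columns (normal equations)."""
    cols = [j for j in range(len(Theta[0])) if active[j]]
    G = [[sum(r[a] * r[b] for r in Theta) for b in cols] for a in cols]
    rhs = [sum(Theta[i][a] * y[i] for i in range(len(y))) for a in cols]
    w = solve(G, rhs)
    full = [0.0] * len(Theta[0])
    for j, c in enumerate(cols):
        full[c] = w[j]
    return full

def stls(Theta, y, threshold=0.1, n_iter=10):
    """Sequential threshold least squares: fit, threshold, refit."""
    w = lstsq(Theta, y, [True] * len(Theta[0]))
    for _ in range(n_iter):
        active = [abs(wj) >= threshold for wj in w]
        w = lstsq(Theta, y, active)
    return w

# Recover dx/dt = x - x^2 from exact samples with the library [1, x, x^2];
# the constant term is correctly thresholded away.
xs = [0.1 * k for k in range(1, 10)]
Theta = [[1.0, x, x * x] for x in xs]
y = [x - x * x for x in xs]
w = stls(Theta, y)
```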
  5. Whether the 3D incompressible Navier–Stokes equations can develop a finite time singularity from smooth initial data is one of the most challenging problems in nonlinear PDEs. In this paper, we present some new numerical evidence that the incompressible axisymmetric Navier–Stokes equations with smooth initial data of finite energy seem to develop potentially singular behavior at the origin. This potentially singular behavior is induced by a potential finite time singularity of the 3D Euler equations that we reported in a companion paper published in the same issue, see also Hou (Potential singularity of the 3D Euler equations in the interior domain. arXiv:2107.05870 [math.AP], 2021). We present numerical evidence that the 3D Navier–Stokes equations develop nearly self-similar singular scaling properties with maximum vorticity increased by a factor of 10^7. We have applied several blow-up criteria to study the potentially singular behavior of the Navier–Stokes equations. The Beale–Kato–Majda blow-up criterion and the blow-up criteria based on the growth of enstrophy and negative pressure seem to imply that the Navier–Stokes equations using our initial data develop a potential finite time singularity. We have also examined the Ladyzhenskaya–Prodi–Serrin regularity criteria (Kiselev and Ladyzhenskaya in Izv Akad Nauk SSSR Ser Mat 21(5):655–690, 1957; Prodi in Ann Mat Pura Appl 4(48):173–182, 1959; Serrin in Arch Ration Mech Anal 9:187–191, 1962) that are based on the growth rate of the L^q_t L^p_x norm of the velocity with 3/p + 2/q ≤ 1. Our numerical results for the cases of (p,q) = (4,8), (6,4), (9,3) and (p,q) = (∞,2) provide strong evidence for the potentially singular behavior of the Navier–Stokes equations. The critical case of (p,q) = (3,∞) is more difficult to verify numerically due to the extremely slow growth rate in the L^3 norm of the velocity field and the significant contribution from the far field, where we have a relatively coarse grid. Our numerical study shows that while the global L^3 norm of the velocity grows very slowly, the localized version of the L^3 norm of the velocity experiences rapid dynamic growth relative to the localized L^3 norm of the initial velocity. This provides further evidence for the potentially singular behavior of the Navier–Stokes equations.
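The Beale–Kato–Majda criterion referenced above ties blow-up at time T to the divergence of the time integral of the maximum vorticity. A minimal numerical check of that quantity on synthetic self-similar data (||omega||_inf ~ c/(T - t), illustrative only, not the paper's computed fields) looks like:

```python
import math

def bkm_integral(t_grid, max_vorticity):
    """Trapezoidal approximation of the Beale-Kato-Majda quantity
    int_0^t_end ||omega(t)||_inf dt; blow-up at T requires that this
    diverge as t_end -> T."""
    total = 0.0
    for k in range(1, len(t_grid)):
        dt = t_grid[k] - t_grid[k - 1]
        total += 0.5 * dt * (max_vorticity[k] + max_vorticity[k - 1])
    return total

# Synthetic growth ||omega||_inf = c / (T - t): the integral up to t_end
# behaves like c * log(T / (T - t_end)), so it grows without bound as
# t_end approaches the putative singularity time T.
T, c = 1.0, 1.0
vals = []
for t_end in (0.9, 0.99, 0.999):
    ts = [t_end * k / 1000 for k in range(1001)]
    omega = [c / (T - t) for t in ts]
    vals.append(bkm_integral(ts, omega))
```

In practice one would feed in the computed maximum-vorticity history instead of the synthetic c/(T - t) profile and fit the observed growth against this logarithmic divergence.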