Search for: All records

Award ID contains: 2204618


  1. Abstract: In this article, we exploit the similarities between Tikhonov regularization and Bayesian hierarchical models to propose a regularization scheme that acts like a distributed Tikhonov regularization, where the amount of regularization varies from component to component, together with a computationally efficient numerical scheme suitable for large-scale problems. In the standard formulation, Tikhonov regularization compensates for the inherent ill-conditioning of linear inverse problems by augmenting the data fidelity term, which measures the mismatch between the data and the model output, with a scaled penalty functional. The selection of the scaling of the penalty functional is the core problem in Tikhonov regularization. If an estimate of the amount of noise in the data is available, a popular approach is the Morozov discrepancy principle, which states that the scaling parameter should be chosen so that the norm of the data-fitting error is approximately equal to the norm of the noise in the data. Too small a value of the regularization parameter yields a solution that fits the noise (too weak regularization), while too large a value leads to excessive penalization of the solution (too strong regularization). In many applications it would be preferable to apply distributed regularization, replacing the regularization scalar by a vector-valued parameter, so as to allow different regularization for different components of the unknown, or for groups of them. Distributed Tikhonov-inspired regularization is particularly well suited when the data have significantly different sensitivities to different components, or when sparsity of the solution is to be promoted. The numerical scheme that we propose, while exploiting the Bayesian interpretation of the inverse problem and identifying Tikhonov regularization with maximum a posteriori estimation, requires no statistical tools. A clever combination of numerical linear algebra and numerical optimization tools makes the scheme computationally efficient and suitable for problems where the matrix is not explicitly available. Moreover, in the case of underdetermined problems, passing through the adjoint formulation in data space may lead to a substantial reduction in computational complexity.
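To make the distributed regularization concrete, the following is a minimal sketch of one way such an alternating scheme can be written: a hierarchical model with a component-wise variance theta_i for each unknown, alternating a weighted least-squares update of x with a closed-form update of theta. The gamma-type hyperprior, the parameter names (sigma, beta, theta_star), and the dense direct solve are illustrative assumptions, not the authors' algorithm; in particular, the matrix-free implementation and the data-space (adjoint) formulation for underdetermined problems mentioned in the abstract are not reproduced here.

```python
import numpy as np

def distributed_tikhonov(A, b, sigma=0.01, beta=1.6, theta_star=1e-4, n_iter=30):
    """Alternating MAP sketch for a hierarchical, component-wise Tikhonov model.

    Minimizes  1/(2 sigma^2) ||A x - b||^2 + sum_i x_i^2 / (2 theta_i)
               + sum_i [ theta_i / theta_star - eta * log(theta_i) ],  eta = beta - 3/2,
    alternating a weighted least-squares update of x with a closed-form update
    of the component-wise variances theta.  Illustrative only: the gamma-type
    hyperprior and all parameter values are assumptions, not the paper's model.
    """
    m, n = A.shape
    eta = beta - 1.5                      # requires beta > 3/2
    theta = np.full(n, theta_star)        # initial component-wise variances
    x = np.zeros(n)
    for _ in range(n_iter):
        # x-update: solve (A^T A / sigma^2 + diag(1/theta)) x = A^T b / sigma^2
        H = A.T @ A / sigma**2 + np.diag(1.0 / theta)
        x = np.linalg.solve(H, A.T @ b / sigma**2)
        # theta-update: closed-form minimizer, component by component
        theta = theta_star * (eta / 2.0 +
                              np.sqrt(eta**2 / 4.0 + x**2 / (2.0 * theta_star)))
    return x, theta

# Small synthetic test: sparse signal, underdetermined problem
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 200))
x_true = np.zeros(200); x_true[[20, 75, 140]] = [1.0, -2.0, 1.5]
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat, theta_hat = distributed_tikhonov(A, b)
```

The intent of the toy example is that components supported by the data receive weak regularization (large theta_i), while the remaining components are penalized strongly, which is the distributed-regularization effect described above.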
  2. Abstract: We consider inverse problems of estimating distributed parameters from indirect noisy observations through discretization of continuum models described by partial differential or integral equations. It is well understood that errors arising from the discretization can be detrimental for ill-posed inverse problems, since discretization error behaves like correlated noise. While this problem can be avoided by choosing a discretization fine enough to push the modeling error below the level of the exogenous noise, which is addressed, e.g., by regularization, the computational resources needed to handle the additional degrees of freedom may grow so much as to require high-performance computing environments. Following an earlier idea, we advocate treating the discretization as one of the unknowns of the inverse problem, to be updated iteratively together with the solution. In this approach, the discretization, defined in terms of an underlying metric, is refined selectively only where the representation power of the current mesh is insufficient. In this paper we allow the metrics and meshes to be anisotropic, and we show that this leads to a significant reduction in memory allocation and computing time.
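As a rough illustration of the refine-where-needed idea (not the metric-based anisotropic refinement of the paper), the sketch below alternates a Tikhonov-regularized solve on a piecewise-constant 1D mesh with a refinement step that bisects the cells where the current reconstruction jumps most. The Gaussian observation kernel, the jump indicator, and all parameter values are assumptions made purely for this toy example.

```python
import numpy as np

def forward_matrix(nodes, obs_pts, width=0.05):
    """Observation operator on a piecewise-constant mesh: each entry is the
    integral of a Gaussian kernel centered at an observation point over a cell,
    approximated by the midpoint rule."""
    mids = 0.5 * (nodes[:-1] + nodes[1:])
    h = np.diff(nodes)
    K = np.exp(-0.5 * ((obs_pts[:, None] - mids[None, :]) / width) ** 2)
    return K * h[None, :]

def solve_on_mesh(nodes, obs_pts, y, alpha=1e-3):
    """Tikhonov solution for the piecewise-constant coefficients on one mesh."""
    A = forward_matrix(nodes, obs_pts)
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def refine(nodes, x, n_refine=5):
    """Bisect the cells where the reconstruction jumps most across cell edges,
    i.e. where the current mesh's representation power appears insufficient."""
    jumps = np.abs(np.diff(x))
    worst_edges = np.argsort(jumps)[-n_refine:]
    new_nodes = list(nodes)
    for e in worst_edges:
        # edge e separates cells e and e+1; bisect both adjacent cells
        for c in (e, e + 1):
            new_nodes.append(0.5 * (nodes[c] + nodes[c + 1]))
    return np.unique(new_nodes)

# Outer loop: the mesh is updated together with the solution
rng = np.random.default_rng(1)
obs_pts = np.linspace(0, 1, 40)
f = lambda t: 1.0 * ((t > 0.3) & (t < 0.5))          # true coefficient: a step
fine = np.linspace(0, 1, 400)
y = forward_matrix(fine, obs_pts) @ f(0.5 * (fine[:-1] + fine[1:]))
y += 0.01 * rng.standard_normal(y.size)

nodes = np.linspace(0, 1, 9)                          # coarse initial mesh
for _ in range(6):
    x = solve_on_mesh(nodes, obs_pts, y)
    nodes = refine(nodes, x)
```

The design choice illustrated here is that refinement is driven by the reconstruction itself, so mesh nodes accumulate near the edges of the step while smooth regions keep a coarse, cheap representation.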
  3. Abstract: Bayesian particle filters (PFs) are a viable alternative to sampling methods such as Markov chain Monte Carlo methods for estimating model parameters and related uncertainties when the forward model is a dynamical system and the data are time series that depend on the state vector. PF techniques are particularly attractive when the dimensionality of the state space is large and the numerical solution of the dynamical system over the time interval corresponding to the data is time consuming. Moreover, the information contained in the PF solution can be used to infer the sensitivity of the unknown parameters to different temporal segments of the data, which in turn can guide the design of more efficient and effective data collection procedures. In this article, the PF method is applied to the problem of estimating cell membrane permeability to gases from pH measurements on or near the cell membrane. The forward model in this case comprises a spatially distributed system of coupled reaction-diffusion differential equations. The high dimensionality of the state space and the need to account for the micro-environment created by the pH electrode measurement device are additional challenges addressed by the solution method.
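The following is a textbook bootstrap particle filter sketch for estimating a parameter from time-series data, with the unknown parameter carried along in the particle ensemble. A scalar decay model stands in for the coupled reaction-diffusion system of the paper, and the jitter step, noise levels, and prior are illustrative assumptions rather than the authors' setup.

```python
import numpy as np

def bootstrap_pf(y_obs, t_obs, n_particles=2000, obs_sigma=0.05, rng=None):
    """Bootstrap particle filter estimating a scalar rate constant k
    for dx/dt = -k*x from noisy observations of x.  The parameter is appended
    to the state and perturbed slightly after resampling (artificial-evolution
    style) to fight sample impoverishment."""
    rng = rng or np.random.default_rng(0)
    # prior draws for the state x and the parameter k
    x = 1.0 + 0.05 * rng.standard_normal(n_particles)
    k = rng.uniform(0.1, 2.0, n_particles)
    t_prev = 0.0
    for t, y in zip(t_obs, y_obs):
        dt = t - t_prev
        # propagate each particle with its own parameter (exact solution here;
        # in general a numerical ODE/PDE solver would be called per particle)
        x = x * np.exp(-k * dt)
        # reweight by the observation likelihood, then resample (multinomial)
        w = np.exp(-0.5 * ((y - x) / obs_sigma) ** 2)
        w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)
        x, k = x[idx], k[idx]
        # small jitter on the parameter keeps the ensemble diverse
        k = np.abs(k + 0.01 * rng.standard_normal(n_particles))
        t_prev = t
    return k            # posterior ensemble for the parameter

# Synthetic data with true k = 0.7
rng = np.random.default_rng(1)
t_obs = np.linspace(0.2, 4.0, 20)
y_obs = np.exp(-0.7 * t_obs) + 0.05 * rng.standard_normal(t_obs.size)
k_post = bootstrap_pf(y_obs, t_obs, rng=rng)
print(k_post.mean(), k_post.std())
```

Because the weights are computed segment by segment, the evolution of the ensemble spread across observation times gives the kind of sensitivity information to temporal data segments mentioned in the abstract.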
  4. Free, publicly-accessible full text available October 1, 2026
  5. Optical flow velocimetry (OFV) is a method for determining dense and accurate velocity fields from a pair of particle images by solving the classical optical flow problem. This is, however, an ill-posed inverse problem, which generally entails minimizing a weighted sum of two terms, a data fidelity term and a regularization term, where the weights are parameters that require manual tuning based on the properties of both the flow and the particle images. This manual tuning has historically been a persistent challenge that has limited the general applicability of OFV to experimental data, since the calculated velocity field is sensitive to the values of the weights. This work proposes a hierarchical model for the weighting parameters in the framework of a maximum a posteriori-based Bayesian optimization approach. The method replaces the classical Lagrange multiplier weighting parameter with a new, less sensitive parameter that can be automatically predetermined from experimental images. The resulting method is tested on three different synthetic particle image velocimetry (PIV) datasets and on experimental particle images. The method is found to be capable of self-adjusting the local weights of the optimization process in real time while simultaneously determining the velocity field, leading to an optimally regularized estimate of the velocity field without requiring any dataset-specific manual tuning of the parameters. The presented approach is the first truly general, parameter-free optical flow method for PIV images. The developed method is freely available as part of the PIVlab package.
    Free, publicly-accessible full text available May 1, 2026
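As a generic illustration of locally weighted optical flow regularization (not the hierarchical MAP scheme or the PIVlab implementation described in the abstract above), the sketch below runs a classical Horn-Schunck iteration in which the usual scalar weight is replaced by a per-pixel field that is re-estimated in an outer loop. The particular weight update, driven by the local image-gradient energy, is an assumption for illustration only.

```python
import numpy as np

def _avg(f):
    """Four-neighbor average (periodic boundaries, for brevity)."""
    return 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                   np.roll(f, 1, 1) + np.roll(f, -1, 1))

def weighted_horn_schunck(im1, im2, n_outer=5, n_inner=200):
    """Horn-Schunck optical flow with a spatially varying regularization weight.

    Inner loop: classical Horn-Schunck update with a per-pixel weight alpha2.
    Outer loop: re-estimate alpha2 from the local image-gradient energy, so
    that flat regions between particles are regularized more strongly.  This
    weight update is an illustrative assumption, not the paper's method.
    """
    im1 = im1.astype(float); im2 = im2.astype(float)
    Ix = 0.25 * (np.roll(im1, -1, 1) - np.roll(im1, 1, 1) +
                 np.roll(im2, -1, 1) - np.roll(im2, 1, 1))   # central differences
    Iy = 0.25 * (np.roll(im1, -1, 0) - np.roll(im1, 1, 0) +
                 np.roll(im2, -1, 0) - np.roll(im2, 1, 0))
    It = im2 - im1
    u = np.zeros_like(im1); v = np.zeros_like(im1)
    alpha2 = np.ones_like(im1)                 # per-pixel regularization weight
    for _ in range(n_outer):
        for _ in range(n_inner):
            u_bar, v_bar = _avg(u), _avg(v)
            num = Ix * u_bar + Iy * v_bar + It
            den = alpha2 + Ix**2 + Iy**2
            u = u_bar - Ix * num / den
            v = v_bar - Iy * num / den
        # outer update of the local weight field: smooth the gradient energy
        # and regularize more strongly where it is small
        g = Ix**2 + Iy**2
        for _ in range(10):
            g = _avg(g)
        alpha2 = 0.1 + 1.0 / (1.0 + 50.0 * g)
    return u, v
```

The design point this sketch is meant to convey is that a single global Lagrange multiplier is replaced by a weight field, so the fidelity-versus-smoothness balance can differ between particle-rich and particle-poor regions of the images.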