

Search for: All records

Creators/Authors contains: "Villa, Umberto"

Note: Clicking on a Digital Object Identifier (DOI) number will take you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Rock strength has long been linked to lithospheric deformation and seismicity. However, independent constraints on the related elastic heterogeneity are missing, yet they could provide key information for solid Earth dynamics. Using coseismic Global Navigation Satellite Systems (GNSS) data for the 2011 M9 Tohoku-oki earthquake in Japan, we apply an inverse method to infer elastic structure and fault slip simultaneously. We find compliant material beneath the volcanic arc and in the mantle wedge within the partial-melt generation zone inferred to lie above ~100 km slab depth. We also identify low-rigidity material closer to the trench that matches seismicity patterns, likely associated with accretionary wedge structure. Along with traditional seismic and electromagnetic methods, our approach opens up avenues for multiphysics inversions. These have the potential to advance earthquake and volcano science and, in particular once expanded to InSAR-type constraints, may lead to a better understanding of transient lithospheric deformation across scales.

     
    Free, publicly-accessible full text available June 7, 2025
  2. Ultrasound computed tomography (USCT) is an advanced imaging technique used in structural health monitoring (SHM) and medical imaging because of its relatively low cost and rapid data acquisition. The time-domain full waveform inversion (TDFWI) method, an iterative inversion approach, has shown great promise in USCT. However, the iterative process can be very time-consuming and computationally expensive; it can be greatly accelerated by integrating an AI-based approach, e.g., a convolutional neural network (CNN). Once trained, the CNN model takes low-iteration TDFWI images as input and instantaneously predicts the material property distribution within the scanned region. Nevertheless, the quality of the reconstruction with the current CNN degrades as the complexity of the material distribution increases. Another challenge is the limited availability of experimental data and, in some cases, even of synthetic surrogate data. To alleviate these issues, this paper details a systematic study of how the reconstruction quality of a 2D CNN (U-Net) can be improved with limited training data. To achieve this, different augmentation schemes (flipping and mixing existing data) were implemented to increase the amount and complexity of the training datasets without generating a substantial number of new samples. The objective was to evaluate the effect of these augmentation techniques on the performance of the U-Net model at different FWI iterations. A thousand numerically built samples with acoustic material properties were used to construct multiple datasets from different FWI iterations. A parallelized, high-performance computing (HPC) based framework was created to rapidly generate the training data. The prediction results were compared against the ground-truth images using standard metrics, such as the structural similarity index measure (SSIM) and average mean squared error (MSE). The results show that the increased number of samples from augmentation improves the imaging of complex regions, even with low-iteration FWI training data.
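     For concreteness, a minimal Python sketch (not the authors' code) of the augmentation and evaluation steps described above, assuming each sample is a pair of 2D NumPy arrays (low-iteration FWI image, ground-truth property map); the blending rule used for "mixing" and the helper names are assumptions made for illustration.

        import numpy as np
        from skimage.metrics import structural_similarity as ssim

        def flip_augment(x, y):
            # Return the original pair plus its horizontal and vertical flips.
            return [(x, y), (np.fliplr(x), np.fliplr(y)), (np.flipud(x), np.flipud(y))]

        def mix_pair(x1, y1, x2, y2, alpha=0.5):
            # Blend two samples to create a more complex material distribution (assumed mixing rule).
            return alpha * x1 + (1 - alpha) * x2, alpha * y1 + (1 - alpha) * y2

        def evaluate(pred, truth):
            # Compare a U-Net prediction against the ground truth with SSIM and MSE.
            mse = float(np.mean((pred - truth) ** 2))
            s = ssim(pred, truth, data_range=float(truth.max() - truth.min()))
            return s, mse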

     
  3. Bayesian inference provides a systematic framework for integrating data with mathematical models to quantify the uncertainty in the solution of the inverse problem. However, the solution of Bayesian inverse problems governed by complex forward models described by partial differential equations (PDEs) remains prohibitive with black-box Markov chain Monte Carlo (MCMC) methods. We present hIPPYlib-MUQ, an extensible and scalable software framework that contains implementations of state-of-the-art algorithms aimed at overcoming the challenges of high-dimensional, PDE-constrained Bayesian inverse problems. These algorithms accelerate MCMC sampling by exploiting the geometry and intrinsic low-dimensionality of parameter space via derivative information and low-rank approximation. The software integrates two complementary open-source software packages, hIPPYlib and MUQ. hIPPYlib solves PDE-constrained inverse problems using automatically generated adjoint-based derivatives, but it lacks full Bayesian capabilities. MUQ provides a spectrum of powerful Bayesian inversion models and algorithms, but expects forward models to come equipped with gradients and Hessians to permit large-scale solution. By combining these two complementary libraries, we created a robust, scalable, and efficient software framework that realizes the benefits of each and allows us to tackle complex large-scale Bayesian inverse problems across a broad spectrum of scientific and engineering disciplines. To illustrate the capabilities of hIPPYlib-MUQ, we present a comparison of a number of MCMC methods available in the integrated software on several high-dimensional Bayesian inverse problems. These include problems characterized by both linear and nonlinear PDEs, various noise models, and different parameter dimensions. The results demonstrate that large (∼50×) speedups over conventional black-box and gradient-based MCMC algorithms can be obtained by exploiting Hessian information (from the log-posterior), underscoring the power of the integrated hIPPYlib-MUQ framework.
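     To make the idea of Hessian-informed MCMC concrete without relying on the hIPPYlib-MUQ API, here is a small NumPy sketch of an independence Metropolis-Hastings sampler whose Gaussian proposal is shaped by the Hessian of the negative log-posterior at the MAP point; the function and variable names are placeholders, not library calls, and the actual samplers in the framework are more sophisticated.

        import numpy as np

        def hessian_informed_mh(neg_log_post, m_map, H_map, n_samples, rng):
            # Independence Metropolis-Hastings with Gaussian proposal N(m_map, H_map^{-1}).
            # Shaping the proposal with curvature at the MAP point is the basic idea behind
            # the Hessian-exploiting samplers referred to above.
            L = np.linalg.cholesky(np.linalg.inv(H_map))   # square root of the proposal covariance

            def log_q(x):                                  # proposal log-density (up to a constant)
                d = x - m_map
                return -0.5 * d @ H_map @ d

            m = m_map.copy()
            samples = []
            for _ in range(n_samples):
                prop = m_map + L @ rng.standard_normal(m_map.size)
                log_alpha = -neg_log_post(prop) + neg_log_post(m) + log_q(m) - log_q(prop)
                if np.log(rng.uniform()) < log_alpha:      # Metropolis-Hastings accept/reject
                    m = prop
                samples.append(m.copy())
            return np.array(samples)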
  4. Abstract. The replacement of a nonlinear parameter-to-observable mapping with a linear (affine) approximation is often carried out to reduce the computational costs associated with solving large-scale inverse problems governed by partial differential equations (PDEs). In the case of a linear parameter-to-observable mapping with normally distributed additive noise and a Gaussian prior measure on the parameters, the posterior is Gaussian. However, replacing an accurate model with a (possibly well-justified) linear surrogate model can give misleading results if the induced model approximation error is not accounted for. To account for these errors, the Bayesian approximation error (BAE) approach can be utilised, in which the first- and second-order statistics of the errors are computed via sampling. The most common linear approximation is the linear Taylor expansion, which requires the computation of (Fréchet) derivatives of the parameter-to-observable mapping with respect to the parameters of interest. In this paper, we prove that the (approximate) posterior measure obtained by replacing the nonlinear parameter-to-observable mapping with a linear approximation is in fact independent of the choice of the linear approximation when the BAE approach is employed. Thus, somewhat non-intuitively, employing the zero model as the linear approximation gives the same approximate posterior as any other choice of linear approximation of the parameter-to-observable model. This independence from the choice of linear approximation is demonstrated mathematically and illustrated with two numerical PDE-based problems: an inverse scattering type problem and an inverse conductivity type problem.
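     For orientation, a sketch of the BAE construction referred to above, written in notation of my own choosing (not necessarily the paper's): let A denote the accurate (nonlinear) parameter-to-observable map, L the linear surrogate, and e the additive measurement noise. Then

        \[
          y \;=\; \mathcal{A}(x) + e
            \;=\; \mathcal{L}(x) \,+\, \underbrace{\bigl(\mathcal{A}(x)-\mathcal{L}(x)\bigr)}_{\varepsilon(x)} \,+\, e
            \;=:\; \mathcal{L}(x) + \nu ,
        \]
        \[
          \nu \;\approx\; \mathcal{N}\!\bigl(\varepsilon_* + e_*,\; \Gamma_\varepsilon + \Gamma_e\bigr),
        \]

     where the mean \varepsilon_* and covariance \Gamma_\varepsilon of the approximation error are estimated by sampling x from the prior, as described above. The paper's result is that the approximate posterior built from this error model does not depend on which linear map \mathcal{L} is chosen (including \mathcal{L} \equiv 0); intuitively, any change in \mathcal{L}(x) is compensated by the corresponding change in \varepsilon(x) and its statistics.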
  5. We present an extensible software framework, hIPPYlib, for solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
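     As a concrete illustration of the low-rank posterior-covariance construction described above, here is a dense NumPy/SciPy sketch suitable only for small problems (hIPPYlib itself works matrix-free with randomized eigensolvers; the function name is a placeholder, not the library API).

        import numpy as np
        from scipy.linalg import eigh, inv

        def laplace_posterior_cov(H_misfit, Gamma_prior, r):
            # Rank-r Laplace approximation of the posterior covariance:
            #   Gamma_post ~= Gamma_prior - V_r diag(lam_i / (1 + lam_i)) V_r^T,
            # where (lam_i, v_i) solve the generalized eigenproblem
            #   H_misfit v = lam * Gamma_prior^{-1} v,
            # and the columns of V_r are Gamma_prior^{-1}-orthonormal.
            Gpinv = inv(Gamma_prior)
            lam, V = eigh(H_misfit, Gpinv)              # ascending eigenvalues, V^T Gpinv V = I
            lam, V = lam[::-1][:r], V[:, ::-1][:, :r]   # keep the r dominant modes
            D = lam / (1.0 + lam)
            return Gamma_prior - (V * D) @ V.T

     The dominant modes are the directions in which the data are most informative relative to the prior; truncating to those modes is what makes the construction tractable at high parameter dimension.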
  6. Abstract. We consider the problem of inferring the basal sliding coefficient field for an uncertain Stokes ice sheet forward model from synthetic surface velocity measurements. The uncertainty in the forward model stems from unknown (or uncertain) auxiliary parameters (e.g., rheology parameters). This inverse problem is posed within the Bayesian framework, which provides a systematic means of quantifying uncertainty in the solution. To account for the associated model uncertainty (error), we employ the Bayesian approximation error (BAE) approach to approximately premarginalize simultaneously over both the noise in measurements and uncertainty in the forward model. We also carry out approximative posterior uncertainty quantification based on a linearization of the parameter-to-observable map centered at the maximum a posteriori (MAP) basal sliding coefficient estimate, i.e., by taking the Laplace approximation. The MAP estimate is found by minimizing the negative log posterior using an inexact Newton conjugate gradient method. The gradient and Hessian actions to vectors are efficiently computed using adjoints. Sampling from the approximate covariance is made tractable by invoking a low-rank approximation of the data misfit component of the Hessian. We study the performance of the BAE approach in the context of three numerical examples in two and three dimensions. For each example, the basal sliding coefficient field is the parameter of primary interest which we seek to infer, and the rheology parameters (e.g., the flow rate factor or the Glen's flow law exponent coefficient field) represent so-called nuisance (secondary uncertain) parameters. Our results indicate that accounting for model uncertainty stemming from the presence of nuisance parameters is crucial. Namely, our findings suggest that using nominal values for these parameters, as is often done in practice, without taking into account the resulting modeling error, can lead to overconfident and heavily biased results. We also show that the BAE approach can be used to account for the additional model uncertainty at no additional cost at the online stage.
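     A minimal sketch of the premarginalization step described above, assuming the parameter of interest and the nuisance (rheology) parameters can be sampled from their priors; the callable names (forward, sample_m, sample_theta) are hypothetical placeholders, not code from the paper.

        import numpy as np

        def bae_error_statistics(forward, sample_m, sample_theta, theta_nominal, n=200, rng=None):
            # Estimate the mean and covariance of the model error
            #   eps = forward(m, theta) - forward(m, theta_nominal)
            # by sampling (m, theta) from their priors. These statistics are then folded into
            # the measurement-noise model so that the nuisance parameters are (approximately)
            # premarginalized, at no extra cost in the online stage.
            rng = rng or np.random.default_rng()
            eps = []
            for _ in range(n):
                m, theta = sample_m(rng), sample_theta(rng)
                eps.append(forward(m, theta) - forward(m, theta_nominal))
            eps = np.asarray(eps)
            eps_mean = eps.mean(axis=0)
            eps_cov = np.cov(eps, rowvar=False)
            # Modified noise model (assumed form): N(e_mean + eps_mean, Gamma_e + eps_cov)
            return eps_mean, eps_cov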