Title: Inverse Problems in Variational Inequalities by Minimizing Energy
This work develops an abstract framework for parameter estimation in elliptic variational inequalities. Differentiability of the parameter-to-solution map in the optimization problems is studied using a smoothing of the penalty map. We derive necessary optimality conditions for the optimization problems under consideration, and the feasibility of the approach is tested through numerical experiments.
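For orientation, here is a minimal sketch of how such a smoothing typically enters an obstacle-type variational inequality; the bilinear form $a$, obstacle $\psi$, and the particular $C^1$ smoothing $m_\rho$ below are illustrative assumptions, not necessarily the formulation used in the paper.

```latex
% Obstacle-type variational inequality: find u in K with
\[
a(u,\,v-u) \;\ge\; (f,\,v-u) \qquad \forall\, v \in K := \{\, v \le \psi \,\},
\]
% penalization replaces the constraint, and a smoothing m_rho replaces t -> max(t,0):
\[
a(u_\varepsilon,\,v) + \tfrac{1}{\varepsilon}\big(m_\rho(u_\varepsilon-\psi),\,v\big) = (f,\,v),
\qquad
m_\rho(t) =
\begin{cases}
0, & t \le 0,\\[2pt]
t^2/(2\rho), & 0 < t < \rho,\\[2pt]
t - \rho/2, & t \ge \rho.
\end{cases}
\]
% Since m_rho is C^1, the penalized parameter-to-solution map is differentiable,
% which is what permits the derivation of necessary optimality conditions.
```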
Award ID(s):
1720067
NSF-PAR ID:
10100986
Author(s) / Creator(s):
Date Published:
Journal Name:
Pure and applied functional analysis
Volume:
4
Issue:
2
ISSN:
2189-3756
Page Range / eLocation ID:
247-269
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The Bayesian formulation of inverse problems is attractive for three primary reasons: it provides a clear modelling framework; it allows for principled learning of hyperparameters; and it can provide uncertainty quantification. The posterior distribution may in principle be sampled by means of MCMC or SMC methods, but for many problems it is computationally infeasible to do so. In this situation maximum a posteriori (MAP) estimators are often sought. Whilst these are relatively cheap to compute, and have an attractive variational formulation, a key drawback is their lack of invariance under change of parameterization; it is important to study MAP estimators, however, because they provide a link with classical optimization approaches to inverse problems and the Bayesian link may be used to improve upon classical optimization approaches. The lack of invariance of MAP estimators under change of parameterization is a particularly significant issue when hierarchical priors are employed to learn hyperparameters. In this paper we study the effect of the choice of parameterization on MAP estimators when a conditionally Gaussian hierarchical prior distribution is employed. Specifically we consider the centred parameterization, the natural parameterization in which the unknown state is solved for directly, and the noncentred parameterization, which works with a whitened Gaussian as the unknown state variable, and arises naturally when considering dimension-robust MCMC algorithms; MAP estimation is well-defined in the nonparametric setting only for the noncentred parameterization. However, we show that MAP estimates based on the noncentred parameterization are not consistent as estimators of hyperparameters; conversely, we show that limits of finite-dimensional centred MAP estimators are consistent as the dimension tends to infinity. We also consider empirical Bayesian hyperparameter estimation, show consistency of these estimates, and demonstrate that they are more robust with respect to noise than centred MAP estimates. An underpinning concept throughout is that hyperparameters may only be recovered up to measure equivalence, a well-known phenomenon in the context of the Ornstein–Uhlenbeck process. The applicability of the results is demonstrated concretely with the study of hierarchical Whittle–Matérn and ARD priors.
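    As a toy illustration of the two parameterizations (a hypothetical one-dimensional Gaussian model, not the paper's hierarchical Whittle–Matérn setting), the sketch below contrasts MAP estimation with the centred variable u against the whitened, noncentred variable xi:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    # Toy hierarchical model: u ~ N(0, sigma(theta)^2 I_n), data y = u + noise.
    # Centred:    unknowns are (u, theta); the prior term couples u and theta.
    # Noncentred: unknowns are (xi, theta) with u = sigma(theta) * xi, xi ~ N(0, I_n).
    # theta gets an improper flat prior here for simplicity.
    rng = np.random.default_rng(0)
    n, noise_std = 50, 0.1
    true_u = rng.normal(0.0, 2.0, n)
    y = true_u + noise_std * rng.normal(size=n)

    def sigma(theta):               # hyperparameter -> prior amplitude
        return np.exp(theta)

    def neg_log_post_centred(z):
        u, theta = z[:n], z[n]
        misfit = 0.5 * np.sum((y - u) ** 2) / noise_std**2
        prior = 0.5 * np.sum(u**2) / sigma(theta)**2 + n * theta  # log-det term
        return misfit + prior

    def neg_log_post_noncentred(z):
        xi, theta = z[:n], z[n]
        u = sigma(theta) * xi       # whitening: u = sigma * xi
        misfit = 0.5 * np.sum((y - u) ** 2) / noise_std**2
        prior = 0.5 * np.sum(xi**2)  # xi ~ N(0, I): theta-independent
        return misfit + prior

    z0 = np.zeros(n + 1)
    map_c = minimize(neg_log_post_centred, z0).x
    map_nc = minimize(neg_log_post_noncentred, z0).x
    print("centred theta MAP:   ", map_c[n])
    print("noncentred theta MAP:", map_nc[n])
    ```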
  2. Large-scale optimization problems abound in data mining and machine learning applications, and the computational challenges they pose are often addressed through parallelization. We identify structural properties under which a convex optimization problem can be massively parallelized via map-reduce operations using the Frank–Wolfe (FW) algorithm. The class of problems that can be tackled this way is quite broad and includes experimental design, AdaBoost, and projection to a convex hull. Implementing FW via map-reduce eases parallelization and deployment via commercial distributed computing frameworks. We demonstrate this by implementing FW over Spark, an engine for parallel data processing, and establish that parallelization through map-reduce yields significant performance improvements: We solve problems with 20 million variables using 350 cores in 79 min; the same operation takes 48 h when executed serially. 
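    As a minimal sketch of the idea (plain Python with a simulated map-reduce over row partitions, not the Spark implementation described above), here is Frank–Wolfe for projection onto the simplex, one of the problem classes mentioned:

    ```python
    import numpy as np
    from functools import reduce

    # Project b onto the convex hull of the columns of A:
    #   min_x ||A x - b||^2  s.t.  x >= 0, sum(x) = 1.
    rng = np.random.default_rng(1)
    m, n = 10_000, 200
    A = rng.normal(size=(m, n))
    b = rng.normal(size=m)

    # Split the rows into "partitions" to mimic map-reduce on a cluster.
    parts = list(zip(np.array_split(A, 8), np.array_split(b, 8)))

    def partial_grad(part, x):
        A_i, b_i = part
        return A_i.T @ (A_i @ x - b_i)        # map step: local gradient

    x = np.full(n, 1.0 / n)                   # start at the simplex barycentre
    for k in range(200):
        # reduce step: sum the partial gradients across partitions
        g = reduce(np.add, (partial_grad(p, x) for p in parts))
        s = np.zeros(n)
        s[np.argmin(g)] = 1.0                 # linear minimization oracle on the simplex
        gamma = 2.0 / (k + 2.0)               # standard FW step size
        x = (1 - gamma) * x + gamma * s

    print("objective:", np.linalg.norm(A @ x - b) ** 2)
    ```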
  3. Purpose

    To introduce a quantitative tool that enables rapid forecasting of T1 and T2 parameter map errors due to normal and aliasing noise as a function of the MR fingerprinting (MRF) sequence, which can be used in sequence optimization.

    Theory and Methods

    The variances of normal noise and aliasing artifacts in the collected signal are related to the variances in T1 and T2 maps through derived quality factors. This analytical result is tested against the results of a Monte‐Carlo approach for analyzing MRF sequence encoding capability in the presence of aliasing noise, and verified with phantom experiments at 3 T. To further show the utility of our approach, our quality factors are used to find efficient MRF sequences for fewer repetitions.

    Results

    Experimental results verify the ability of our quality factors to rapidly assess the efficiency of an MRF sequence in the presence of both normal and aliasing noise. Quality factor assessment of MRF sequences is in agreement with the results of a Monte‐Carlo approach. Analysis of MRF parameter map errors from phantom experiments is consistent with the derived quality factors, with T1 (T2) data yielding goodness of fit R² ≥ 0.92 (0.80). In phantom and in vivo experiments, the efficient pulse sequence, determined through quality factor maximization, led to comparable or improved accuracy and precision relative to a longer sequence, demonstrating quality factor utility in MRF sequence design.

    Conclusion

    The quality factor framework introduced here allows for rapid analysis and optimization of MRF sequence design through T1 and T2 error forecasting.
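    For contrast with the analytical quality factors, here is a toy Monte-Carlo sketch of forecasting parameter-map error for a fingerprinting-style acquisition (a single inversion-recovery T1 model with dictionary matching; the signal model, timings, and noise level are illustrative assumptions, not the paper's derivation):

    ```python
    import numpy as np

    # Toy MRF-style experiment: a signal sampled at a few inversion times is
    # matched against a T1 dictionary; Monte-Carlo over noise realizations
    # estimates the resulting T1 error, mimicking the brute-force alternative
    # to the analytical quality factors.
    rng = np.random.default_rng(2)
    t = np.array([100.0, 300.0, 700.0, 1500.0, 3000.0])   # sampling times (ms)

    def signal(T1):                    # inversion-recovery model (assumed)
        return 1.0 - 2.0 * np.exp(-t / T1)

    T1_grid = np.arange(200.0, 2000.0, 5.0)
    dictionary = np.stack([signal(T1) for T1 in T1_grid])
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

    true_T1, noise_std, n_trials = 800.0, 0.05, 2000
    clean = signal(true_T1)

    estimates = np.empty(n_trials)
    for i in range(n_trials):
        noisy = clean + noise_std * rng.normal(size=t.size)
        match = np.argmax(dictionary @ noisy)             # dot-product matching
        estimates[i] = T1_grid[match]

    print(f"T1 bias: {estimates.mean() - true_T1:+.1f} ms, "
          f"std: {estimates.std():.1f} ms")
    ```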

     
  4. Statistical relational learning models are powerful tools that combine ideas from first-order logic with probabilistic graphical models to represent complex dependencies. Despite their success in encoding large problems with a compact set of weighted rules, performing inference over these models is often challenging. In this paper, we show how to effectively combine two powerful ideas for scaling inference for large graphical models. The first idea, lifted inference, is a well-studied approach to speeding up inference in graphical models by exploiting symmetries in the underlying problem. The second idea is to frame maximum a posteriori (MAP) inference as a convex optimization problem and use the alternating direction method of multipliers (ADMM) to solve the problem in parallel. A well-studied relaxation to the combinatorial optimization problem defined for logical Markov random fields gives rise to a hinge-loss Markov random field (HL-MRF) for which MAP inference is a convex optimization problem. We show how the formalism introduced for coloring weighted bipartite graphs using a color refinement algorithm can be integrated with the ADMM optimization technique to take advantage of the sparse dependency structures of HL-MRFs. Our proposed approach, lifted hinge-loss Markov random fields (LHL-MRFs), preserves the structure of the original problem after lifting and solves lifted inference as distributed convex optimization with ADMM. In our empirical evaluation on real-world problems, we observe up to a three-fold speedup in inference over HL-MRFs.
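    For a feel of the ADMM side, here is a minimal consensus-ADMM sketch for a hinge-loss MAP objective (a generic toy problem in plain numpy; the random potentials and weights are made up for illustration and this is not the LHL-MRF lifting construction):

    ```python
    import numpy as np

    # Toy MAP objective: f(x) = sum_j w_j * max(0, a_j @ x - b_j), x in [0,1]^n,
    # split into one subproblem per potential and solved with consensus ADMM.
    def prox_hinge(v, a, b, w, rho):
        # argmin_z  w*max(0, a@z - b) + (rho/2)*||z - v||^2  (closed form)
        margin = a @ v - b
        if margin <= 0:
            return v.copy()
        t = min(w / rho, margin / (a @ a))
        return v - t * a

    rng = np.random.default_rng(3)
    n, J, rho = 5, 8, 1.0
    A = rng.normal(size=(J, n))
    bvec = rng.normal(size=J)
    wts = rng.uniform(0.5, 2.0, size=J)

    x = np.zeros(n)                   # consensus variable
    Z = np.zeros((J, n))              # local copies, one per potential
    U = np.zeros((J, n))              # scaled dual variables

    for _ in range(200):
        for j in range(J):            # in HL-MRF solvers this loop runs in parallel
            Z[j] = prox_hinge(x - U[j], A[j], bvec[j], wts[j], rho)
        x = np.clip((Z + U).mean(axis=0), 0.0, 1.0)   # consensus + box constraint
        U += Z - x

    obj = np.sum(wts * np.maximum(0.0, A @ x - bvec))
    print("MAP estimate:", np.round(x, 3), " objective:", round(obj, 4))
    ```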
  5. We present an extensible software framework, hIPPYlib, for the solution of large-scale deterministic and Bayesian inverse problems governed by partial differential equations (PDEs) with (possibly) infinite-dimensional parameter fields (which are high-dimensional after discretization). hIPPYlib overcomes the prohibitively expensive nature of Bayesian inversion for this class of problems by implementing state-of-the-art scalable algorithms for PDE-based inverse problems that exploit the structure of the underlying operators, notably the Hessian of the log-posterior. The key property of the algorithms implemented in hIPPYlib is that the solution of the inverse problem is computed at a cost, measured in linearized forward PDE solves, that is independent of the parameter dimension. The mean of the posterior is approximated by the MAP point, which is found by minimizing the negative log-posterior with an inexact matrix-free Newton-CG method. The posterior covariance is approximated by the inverse of the Hessian of the negative log-posterior evaluated at the MAP point. The construction of the posterior covariance is made tractable by invoking a low-rank approximation of the Hessian of the log-likelihood. Scalable tools for sample generation are also discussed. hIPPYlib makes all of these advanced algorithms easily accessible to domain scientists and provides an environment that expedites the development of new algorithms.
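    To illustrate the low-rank Laplace-approximation idea in isolation, here is a small numpy/scipy sketch with dense matrices standing in for the data-misfit Hessian and the prior precision; this is not hIPPYlib's API, which works matrix-free through PDE operators:

    ```python
    import numpy as np
    from scipy.sparse.linalg import eigsh

    # Laplace approximation of the posterior covariance:
    #   cov_post ~= (H_misfit + R_prior)^(-1),
    # made tractable by a low-rank decomposition of the data-misfit Hessian
    # (dense stand-ins here; matrix-free PDE solves in practice).
    rng = np.random.default_rng(4)
    n, r = 400, 20

    # Synthetic data-misfit Hessian with a rapidly decaying spectrum (rank r).
    Q = np.linalg.qr(rng.normal(size=(n, r)))[0]
    H_misfit = Q @ np.diag(np.logspace(2, -1, r)) @ Q.T
    R_prior = np.eye(n)                # prior precision (identity for simplicity)

    # Generalized eigenproblem H_misfit v = lam * R_prior v; keep the top modes.
    k = 30
    lam, V = eigsh(H_misfit, k=k, M=R_prior, which="LM")

    # Low-rank update of the prior covariance (V is R_prior-orthonormal):
    #   cov_post = R_prior^{-1} - V diag(lam / (1 + lam)) V^T
    cov_post = np.linalg.inv(R_prior) - V @ np.diag(lam / (1.0 + lam)) @ V.T

    exact = np.linalg.inv(H_misfit + R_prior)
    err = np.linalg.norm(cov_post - exact) / np.linalg.norm(exact)
    print("relative error of low-rank posterior covariance:", err)
    ```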