Title: Anderson acceleration for seismic inversion
State-of-the-art seismic imaging techniques treat inversion tasks such as full-waveform inversion (FWI) and least-squares reverse time migration (LSRTM) as partial differential equation-constrained optimization problems. Due to the large-scale nature, gradient-based optimization algorithms are preferred in practice to update the model iteratively. Higher-order methods converge in fewer iterations but often require higher computational costs, more line-search steps, and larger memory storage. A balance among these aspects has to be considered. We have conducted an evaluation using Anderson acceleration (AA), a popular strategy to speed up the convergence of fixed-point iterations, to accelerate the steepest-descent algorithm, which we innovatively treat as a fixed-point iteration. Independent of the unknown parameter dimensionality, the computational cost of implementing the method can be reduced to an extremely low-dimensional least-squares problem. The cost can be further reduced by a low-rank update. We determine the theoretical connections and the differences between AA and other well-known optimization methods such as L-BFGS and the restarted generalized minimal residual method and compare their computational cost and memory requirements. Numerical examples of FWI and LSRTM applied to the Marmousi benchmark demonstrate the acceleration effects of AA. Compared with the steepest-descent method, AA can achieve faster convergence and can provide competitive results with some quasi-Newton methods, making it an attractive optimization strategy for seismic inversion.
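The abstract describes recasting the steepest-descent update m_{k+1} = m_k - alpha * grad J(m_k) as a fixed-point iteration m_{k+1} = g(m_k) with g(m) = m - alpha * grad J(m) and applying AA to it, so the per-iteration overhead is a least-squares problem whose size equals the history window rather than the model dimension. The NumPy sketch below shows a generic AA(window) loop in that spirit; the window size, fixed step, and the toy quadratic misfit at the end are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def anderson_acceleration(g, x0, window=5, max_iter=200, tol=1e-8):
        # Accelerates the fixed-point iteration x_{k+1} = g(x_k) with AA(window).
        # The only extra work per iteration is a least-squares solve whose size
        # is at most `window`, independent of the dimension of x.
        x = np.asarray(x0, dtype=float)
        G_hist, F_hist = [], []            # histories of g(x_i) and residuals f_i = g(x_i) - x_i
        for _ in range(max_iter):
            gx = g(x)
            f = gx - x
            if np.linalg.norm(f) < tol:
                break
            G_hist.append(gx)
            F_hist.append(f)
            if len(F_hist) > window + 1:   # keep at most window+1 recent iterates
                G_hist.pop(0)
                F_hist.pop(0)
            if len(F_hist) == 1:
                x = gx                     # no history yet: plain fixed-point step
            else:
                # Tiny least-squares problem: min_gamma || f - dF @ gamma ||
                dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
                dG = np.column_stack([G_hist[i + 1] - G_hist[i] for i in range(len(G_hist) - 1)])
                gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
                x = gx - dG @ gamma        # Anderson mixing of previous g-values (no damping)
        return x

    # Hypothetical usage: steepest descent on a toy quadratic misfit
    # J(m) = 0.5 m^T A m - b^T m, viewed as the map g(m) = m - step * grad J(m).
    A = np.diag(np.linspace(1.0, 50.0, 100))
    b = np.ones(100)
    g_map = lambda m: m - (1.0 / 50.0) * (A @ m - b)
    m_est = anderson_acceleration(g_map, np.zeros(100), window=5)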
Yang, Yunan
(SEG Technical Program Expanded Abstracts 2020)
Full waveform inversion (FWI) and least-squares reverse time migration (LSRTM) are popular imaging techniques that can be solved as PDE-constrained optimization problems. Due to the large-scale nature, gradient- and Hessian-based optimization algorithms are preferred in practice to find the optimizer iteratively. However, a balance between the evaluation cost and the rate of convergence needs to be considered. We propose the use of Anderson acceleration (AA), a popular strategy to speed up the convergence of fixed-point iterations, to accelerate a gradient descent method. We show that AA can achieve fast convergence that provides competitive results with some quasi-Newton methods. Independent of the dimensionality of the unknown parameters, the computational cost of implementing the method can be reduced to an extremely low-dimensional least-squares problem, which makes AA an attractive method for seismic inversion.
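The trade-off mentioned above, cheap gradient steps versus the faster but more expensive convergence of quasi-Newton methods, can be illustrated on a toy quadratic misfit. Everything below is a hypothetical stand-in for an FWI/LSRTM objective, and SciPy's L-BFGS-B is used only as a generic quasi-Newton baseline.

    import numpy as np
    from scipy.optimize import minimize

    # Toy quadratic "misfit" standing in for a seismic objective (illustrative only).
    n = 200
    A = np.diag(np.linspace(1.0, 100.0, n))
    b = np.ones(n)
    J = lambda m: 0.5 * m @ (A @ m) - b @ m
    gradJ = lambda m: A @ m - b

    # Plain steepest descent: one gradient evaluation per iteration, slow convergence.
    m_sd = np.zeros(n)
    step = 1.0 / 100.0
    for _ in range(500):
        m_sd -= step * gradJ(m_sd)

    # Quasi-Newton baseline: fewer iterations, but it stores curvature pairs and
    # performs line searches (extra objective/gradient evaluations per iteration).
    res = minimize(J, np.zeros(n), jac=gradJ, method="L-BFGS-B",
                   options={"maxiter": 100})
    print(np.linalg.norm(gradJ(m_sd)), np.linalg.norm(res.jac))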
Liu, Jia; Rebholz, Leo G.; Xiao, Mengying
(Mathematical Methods in the Applied Sciences)
The incremental Picard Yosida (IPY) method has recently been developed as an iteration for nonlinear saddle point problems that is as effective as Picard but more efficient. By combining ideas from algebraic splitting of linear saddle point solvers with incremental Picard‐type iterations and grad‐div stabilization, IPY improves on the standard Picard method by allowing for easier linear solves at each iteration—but without creating more total nonlinear iterations compared to Picard. This paper extends the IPY methodology by studying it together with Anderson acceleration (AA). We prove that IPY for Navier–Stokes and regularized Bingham fits the recently developed analysis framework for AA, which implies that AA improves the linear convergence rate of IPY by scaling the rate with the gain of the AA optimization problem. Numerical tests illustrate a significant improvement in convergence behavior of IPY methods from AA, for both Navier–Stokes and regularized Bingham.
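The rate-scaling statement above refers to an analysis framework in which, for a fixed-point operator whose residuals w_k = g(x_k) - x_k contract with linear rate kappa, one AA step with damping beta_k improves the first-order bound roughly as follows; the notation is generic and higher-order terms are omitted, so this is a paraphrase of the framework rather than the paper's exact theorem.

    \| w_{k+1} \| \;\lesssim\; \theta_k \bigl( (1 - \beta_k) + \beta_k \kappa \bigr) \, \| w_k \|,
    \qquad
    \theta_k \;=\; \frac{\bigl\| \sum_i \alpha_i^{*} \, w_{k-i} \bigr\|}{\| w_k \|} \;\le\; 1,

where theta_k is the gain of the AA least-squares problem; a gain well below one is what produces the observed speedup.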
A pervasive approach in scientific computing is to express the solution to a given problem as the limit of a sequence of vectors or other mathematical objects. In many situations these sequences are generated by slowly converging iterative procedures, and this led practitioners to seek faster alternatives to reach the limit. ‘Acceleration techniques’ comprise a broad array of methods specifically designed with this goal in mind. They started as a means of improving the convergence of general scalar sequences by various forms of ‘extrapolation to the limit’, i.e. by extrapolating the most recent iterates to the limit via linear combinations. Extrapolation methods of this type, the best-known of which is Aitken’s delta-squared process, require only the sequence of vectors as input. However, limiting methods to use only the iterates is too restrictive. Accelerating sequences generated by fixed-point iterations by utilizing both the iterates and the fixed-point mapping itself has proved highly successful across various areas of physics. A notable example of these fixed-point accelerators (FP-accelerators) is a method developed by Donald Anderson in 1965 and now widely known as Anderson acceleration (AA). Furthermore, quasi-Newton and inexact Newton methods can also be placed in this category since they can be invoked to find limits of fixed-point iteration sequences by employing exactly the same ingredients as those of the FP-accelerators. This paper presents an overview of these methods – with an emphasis on those, such as AA, that are geared toward accelerating fixed-point iterations. We will navigate through existing variants of accelerators, their implementations and their applications, to unravel the close connections between them. These connections were often not recognized by the originators of certain methods, who sometimes stumbled on slight variations of already established ideas. Furthermore, even though new accelerators were invented in different corners of science, the underlying principles behind them are strikingly similar or identical. The plan of this article will approximately follow the historical trajectory of extrapolation and acceleration methods, beginning with a brief description of extrapolation ideas, followed by the special case of linear systems, the application to self-consistent field (SCF) iterations, and a detailed view of Anderson acceleration. The last part of the paper is concerned with more recent developments, including theoretical aspects, and a few thoughts on accelerating machine learning algorithms.
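As a concrete instance of an extrapolation method that uses only the iterates, here is a minimal NumPy sketch of Aitken's delta-squared process applied to a scalar fixed-point sequence; the cosine iteration is just an illustrative example.

    import numpy as np

    def aitken_delta_squared(seq):
        # Aitken extrapolation: x_hat_k = x_k - (x_{k+1} - x_k)^2 / (x_{k+2} - 2 x_{k+1} + x_k).
        # Uses only the sequence itself, with no access to the map that produced it.
        x = np.asarray(seq, dtype=float)
        d1 = x[1:-1] - x[:-2]                    # first forward differences
        d2 = x[2:] - 2.0 * x[1:-1] + x[:-2]      # second differences
        return x[:-2] - d1**2 / d2

    # Example: the slowly (linearly) converging iteration x_{k+1} = cos(x_k).
    xs = [1.0]
    for _ in range(10):
        xs.append(np.cos(xs[-1]))
    print(aitken_delta_squared(xs)[-1])          # extrapolated estimate of the fixed point (~0.739085)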
Mehra, Akshay; Hamm, Jihun
(Proceedings of Machine Learning Research 157, 2021)
Solving a bilevel optimization problem is at the core of several machine learning problems such as hyperparameter tuning, data denoising, meta- and few-shot learning, and training-data poisoning. Different from simultaneous or multi-objective optimization, the steepest descent direction for minimizing the upper-level cost in a bilevel problem requires the inverse of the Hessian of the lower-level cost. In this work, we propose a novel algorithm for solving bilevel optimization problems based on the classical penalty function approach. Our method avoids computing the Hessian inverse and can handle constrained bilevel problems easily. We prove the convergence of the method under mild conditions and show that the exact hypergradient is obtained asymptotically. Our method's simplicity and small space and time complexities enable us to effectively solve large-scale bilevel problems involving deep neural networks. We present results on data denoising, few-shot learning, and training-data poisoning problems in a large-scale setting. Our results show that our approach outperforms or is comparable to previously proposed methods based on automatic differentiation and approximate inversion in terms of accuracy, run-time, and convergence speed.
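The Hessian inverse mentioned above enters through the standard hypergradient formula. Writing the bilevel problem in generic notation (not the paper's) as min_lambda f(lambda, w*(lambda)) with w*(lambda) = argmin_w g(lambda, w), implicit differentiation of the lower-level optimality condition grad_w g(lambda, w*(lambda)) = 0 gives

    \nabla F(\lambda) \;=\; \nabla_{\lambda} f(\lambda, w^{*})
      \;-\; \nabla^{2}_{\lambda w} g(\lambda, w^{*})
      \,\bigl[ \nabla^{2}_{w w} g(\lambda, w^{*}) \bigr]^{-1}
      \nabla_{w} f(\lambda, w^{*}),

which is the term that penalty-based reformulations such as the one described avoid computing when w is high-dimensional (e.g., the weights of a deep network).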
Engquist, Björn; Yang, Yunan
(Communications on Pure and Applied Mathematics)
Varadhan, S.R.S.
(Ed.)
Full-waveform inversion (FWI) is today a standard process for the inverse problem of seismic imaging. PDE-constrained optimization is used to determine unknown parameters in a wave equation that represent geophysical properties. The objective function measures the misfit between the observed data and the calculated synthetic data, and it has traditionally been the least-squares norm. In a sequence of papers, we introduced the Wasserstein metric from optimal transport as an alternative misfit function for mitigating the so-called cycle skipping, which is the trapping of the optimization process in local minima. In this paper, we first give a sharper theorem regarding the convexity of the Wasserstein metric as the objective function. We then focus on two new issues. One is the necessary normalization of turning seismic signals into probability measures such that the theory of optimal transport applies. The other, which is beyond cycle skipping, is the inversion for parameters below reflecting interfaces. For the first, we propose a class of normalizations and prove several favorable properties for this class. For the latter, we demonstrate that FWI using optimal transport can recover geophysical properties from domains where no seismic waves travel through. We finally illustrate these properties by the realistic application of imaging salt inclusions, which has been a significant challenge in exploration geophysics.
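In one dimension, the quadratic Wasserstein metric between two normalized traces reduces to a comparison of their quantile functions, which is what makes trace-by-trace W2 misfits cheap to evaluate. The sketch below uses one simple member of the kind of normalization class the abstract refers to, a positive shift followed by rescaling to unit mass; the constant c, the uniform time grid, and the function name are assumptions for illustration, not the paper's specific choice.

    import numpy as np

    def w2_trace_misfit(f, g, t, c=None):
        # W2^2 misfit between two 1-D traces f(t) and g(t) on a uniform grid (sketch).
        # Step 1 (hypothetical normalization): shift both traces to be nonnegative
        # and rescale to unit mass so they can be treated as probability densities.
        dt = t[1] - t[0]
        if c is None:
            c = 1.1 * max(abs(float(f.min())), abs(float(g.min())), 1e-12)
        fpos = (f + c) / (np.sum(f + c) * dt)
        gpos = (g + c) / (np.sum(g + c) * dt)
        # Step 2: cumulative distribution functions on the time grid.
        F = np.cumsum(fpos) * dt
        G = np.cumsum(gpos) * dt
        # Step 3: in 1-D, W2^2 = integral over p in [0,1] of |F^{-1}(p) - G^{-1}(p)|^2.
        p = np.linspace(0.0, 1.0, len(t))
        Finv = np.interp(p, F, t)                 # quantile function of the synthetic trace
        Ginv = np.interp(p, G, t)                 # quantile function of the observed trace
        return np.sum((Finv - Ginv) ** 2) * (p[1] - p[0])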
Yang, Yunan. Anderson acceleration for seismic inversion. GEOPHYSICS, 86(1). Retrieved from https://par.nsf.gov/biblio/10252867. doi:10.1190/geo2020-0462.1.
@article{osti_10252867,
title = {Anderson acceleration for seismic inversion},
url = {https://par.nsf.gov/biblio/10252867},
DOI = {10.1190/geo2020-0462.1},
journal = {GEOPHYSICS},
volume = {86},
number = {1},
author = {Yang, Yunan},
}