Title: Acceleration methods for fixed-point iterations
A pervasive approach in scientific computing is to express the solution to a given problem as the limit of a sequence of vectors or other mathematical objects. In many situations these sequences are generated by slowly converging iterative procedures, and this led practitioners to seek faster alternatives to reach the limit. ‘Acceleration techniques’ comprise a broad array of methods specifically designed with this goal in mind. They started as a means of improving the convergence of general scalar sequences by various forms of ‘extrapolation to the limit’, i.e. by extrapolating the most recent iterates to the limit via linear combinations. Extrapolation methods of this type, the best-known of which is Aitken’s delta-squared process, require only the sequence of vectors as input. However, limiting methods to use only the iterates is too restrictive. Accelerating sequences generated by fixed-point iterations by utilizing both the iterates and the fixed-point mapping itself has proved highly successful across various areas of physics. A notable example of these fixed-point accelerators (FP-accelerators) is a method developed by Donald Anderson in 1965 and now widely known as Anderson acceleration (AA). Furthermore, quasi-Newton and inexact Newton methods can also be placed in this category since they can be invoked to find limits of fixed-point iteration sequences by employing exactly the same ingredients as those of the FP-accelerators. This paper presents an overview of these methods – with an emphasis on those, such as AA, that are geared toward accelerating fixed-point iterations. We will navigate through existing variants of accelerators, their implementations and their applications, to unravel the close connections between them. These connections were often not recognized by the originators of certain methods, who sometimes stumbled on slight variations of already established ideas. Furthermore, even though new accelerators were invented in different corners of science, the underlying principles behind them are strikingly similar or identical. The plan of this article will approximately follow the historical trajectory of extrapolation and acceleration methods, beginning with a brief description of extrapolation ideas, followed by the special case of linear systems, the application to self-consistent field (SCF) iterations, and a detailed view of Anderson acceleration. The last part of the paper is concerned with more recent developments, including theoretical aspects, and a few thoughts on accelerating machine learning algorithms.
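To make the fixed-point acceleration idea concrete, here is a minimal sketch of Anderson acceleration in its common windowed least-squares form. The fixed-point map g, the window size m, and the Richardson-type test problem are illustrative assumptions for this page, not code taken from the paper.

```python
import numpy as np

def anderson_acceleration(g, x0, m=5, tol=1e-10, maxiter=100):
    """Accelerate the fixed-point iteration x <- g(x) using a window of m previous steps."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    G_hist, F_hist = [gx], [gx - x]          # histories of g-values and residuals f = g(x) - x
    x = gx                                   # the first step is a plain fixed-point step
    for _ in range(maxiter):
        gx = g(x)
        f = gx - x
        if np.linalg.norm(f) < tol:
            return gx
        G_hist.append(gx)
        F_hist.append(f)
        G_hist, F_hist = G_hist[-(m + 1):], F_hist[-(m + 1):]   # keep a sliding window
        # Residual differences define a small least-squares problem for the mixing weights.
        dF = np.column_stack([F_hist[i + 1] - F_hist[i] for i in range(len(F_hist) - 1)])
        dG = np.column_stack([G_hist[i + 1] - G_hist[i] for i in range(len(G_hist) - 1)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = gx - dG @ gamma                  # accelerated (mixed) iterate
    return x

# Illustrative use: a Richardson-type map whose fixed point solves A x = b.
rng = np.random.default_rng(0)
A = np.eye(50) + 0.1 * rng.standard_normal((50, 50))
b = rng.standard_normal(50)
g = lambda x: x - 0.1 * (A @ x - b)
x = anderson_acceleration(g, np.zeros(50))
print(np.linalg.norm(A @ x - b))
```

The only per-step overhead beyond evaluating g is a least-squares problem with at most m unknowns, independent of the dimension of x, which is what makes this class of methods practical at scale.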
Award ID(s):
2208456
PAR ID:
10635057
Author(s) / Creator(s):
Publisher / Repository:
Acta Numerica
Date Published:
Journal Name:
Acta Numerica
Volume:
34
ISSN:
0962-4929
Page Range / eLocation ID:
805 to 890
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. State-of-the-art seismic imaging techniques treat inversion tasks such as full-waveform inversion (FWI) and least-squares reverse time migration (LSRTM) as partial differential equation-constrained optimization problems. Due to the large-scale nature, gradient-based optimization algorithms are preferred in practice to update the model iteratively. Higher-order methods converge in fewer iterations but often require higher computational costs, more line-search steps, and bigger memory storage. A balance among these aspects has to be considered. We have conducted an evaluation using Anderson acceleration (AA), a popular strategy to speed up the convergence of fixed-point iterations, to accelerate the steepest-descent algorithm, which we innovatively treat as a fixed-point iteration. Independent of the unknown parameter dimensionality, the computational cost of implementing the method can be reduced to an extremely low-dimensional least-squares problem. The cost can be further reduced by a low-rank update. We determine the theoretical connections and the differences between AA and other well-known optimization methods such as L-BFGS and the restarted generalized minimal residual method and compare their computational cost and memory requirements. Numerical examples of FWI and LSRTM applied to the Marmousi benchmark demonstrate the acceleration effects of AA. Compared with the steepest-descent method, AA can achieve faster convergence and can provide competitive results with some quasi-Newton methods, making it an attractive optimization strategy for seismic inversion.
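As a toy illustration of the reformulation described in the abstract above, the sketch below treats steepest descent on a quadratic misfit as the fixed-point map g(m) = m - a*grad(m) and hands its residual to SciPy's Anderson mixer. The quadratic stands in for an FWI/LSRTM objective; the operator J, the data d and the step size a are assumptions made only for this example.

```python
import numpy as np
from scipy.optimize import anderson

rng = np.random.default_rng(1)
J = rng.standard_normal((80, 40))                 # toy linear forward operator
d = rng.standard_normal(80)                       # toy observed data
misfit_grad = lambda m: J.T @ (J @ m - d)         # gradient of 0.5 * ||J m - d||^2

a = 1.0 / np.linalg.norm(J, 2) ** 2               # a stable steepest-descent step size
residual = lambda m: -a * misfit_grad(m)          # g(m) - m for the map g(m) = m - a * grad(m)

m_aa = anderson(residual, np.zeros(40), f_tol=1e-9)   # Anderson mixing on the fixed-point residual
print(np.linalg.norm(misfit_grad(m_aa)))
```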
  2. This paper examines a number of extrapolation and acceleration methods and introduces a few modifications of the standard Shanks transformation that deal with general sequences. One of the goals of the paper is to lay out a general framework that encompasses most of the known acceleration strategies. The paper also considers the Anderson Acceleration (AA) method under a new light and exploits a connection with quasi-Newton methods in order to establish local linear convergence results of a stabilized version of the AA method. The methods are tested on a number of problems, including a few that arise from nonlinear partial differential equations.
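The simplest Shanks-type transformation referred to in this abstract is Aitken's delta-squared process; a scalar sketch is below. The slowly converging test sequence (partial sums of the Leibniz series for pi/4) is an illustrative assumption.

```python
import numpy as np

def aitken(s):
    """One sweep of Aitken's Delta-squared process over a scalar sequence s."""
    s = np.asarray(s, dtype=float)
    ds = s[1:] - s[:-1]                       # first differences
    d2s = ds[1:] - ds[:-1]                    # second differences
    return s[:-2] - ds[:-1] ** 2 / d2s        # s_k - (Delta s_k)^2 / (Delta^2 s_k)

n = np.arange(1, 20)
partial = np.cumsum((-1.0) ** (n + 1) / (2 * n - 1))   # partial sums of the Leibniz series
once = aitken(partial)
twice = aitken(once)
print(abs(partial[-1] - np.pi / 4), abs(once[-1] - np.pi / 4), abs(twice[-1] - np.pi / 4))
```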
  3. The incremental Picard Yosida (IPY) method has recently been developed as an iteration for nonlinear saddle point problems that is as effective as Picard but more efficient. By combining ideas from algebraic splitting of linear saddle point solvers with incremental Picard‐type iterations and grad‐div stabilization, IPY improves on the standard Picard method by allowing for easier linear solves at each iteration—but without creating more total nonlinear iterations compared to Picard. This paper extends the IPY methodology by studying it together with Anderson acceleration (AA). We prove that IPY for Navier–Stokes and regularized Bingham fits the recently developed analysis framework for AA, which implies that AA improves the linear convergence rate of IPY by scaling the rate with the gain of the AA optimization problem. Numerical tests illustrate a significant improvement in convergence behavior of IPY methods from AA, for both Navier–Stokes and regularized Bingham. 
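For orientation, the "gain" mentioned in this abstract can be read schematically as follows (notation assumed here and simplified from the AA analysis framework the authors cite, not quoted from their paper): if the underlying iteration contracts the residual w_k linearly with rate kappa, a depth-m AA step with damping beta_k satisfies, to leading order,

```latex
\[
  \|w_{k+1}\| \;\lesssim\; \theta_k \bigl((1-\beta_k) + \beta_k \kappa\bigr)\, \|w_k\|,
  \qquad
  \theta_k \;=\; \frac{\min_{\gamma}\bigl\| w_k - \sum_{j=1}^{m} \gamma_j\,(w_{k-j+1} - w_{k-j}) \bigr\|}{\|w_k\|} \;\le\; 1,
\]
```

so the gain of the small AA least-squares problem directly scales the linear convergence rate, which is the sense in which AA improves the rate of IPY.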
  4. Full waveform inversion (FWI) and least-squares reverse time migration (LSRTM) are popular imaging techniques that can be solved as PDE-constrained optimization problems. Due to the large-scale nature, gradient- and Hessian-based optimization algorithms are preferred in practice to find the optimizer iteratively. However, a balance between the evaluation cost and the rate of convergence needs to be considered. We propose the use of Anderson acceleration (AA), a popular strategy to speed up the convergence of fixed-point iterations, to accelerate a gradient descent method. We show that AA can achieve fast convergence that provides competitive results with some quasi-Newton methods. Independent of the dimensionality of the unknown parameters, the computational cost of implementing the method can be reduced to an extremely low-dimensional least-squares problem, which makes AA an attractive method for seismic inversion.
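The "extremely low-dimensional least-squares problem" mentioned above is, in the common formulation of AA (notation assumed here), the m-variable problem

```latex
\[
  \min_{\gamma \in \mathbb{R}^{m}}
  \Bigl\| f_k - \sum_{j=1}^{m} \gamma_j \,(f_{k-j+1} - f_{k-j}) \Bigr\|_2,
  \qquad f_i = g(x_i) - x_i,
  \qquad
  x_{k+1} = g(x_k) - \sum_{j=1}^{m} \gamma_j \,\bigl(g(x_{k-j+1}) - g(x_{k-j})\bigr),
\]
```

whose size m (typically 5 to 10) is independent of the number of unknown parameters.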
  5. We analyze and test a simple-to-implement two-step iteration for the incompressible Navier-Stokes equations that consists of first applying the Picard iteration and then applying the Newton iteration to the Picard output. We prove that this composition of Picard and Newton converges quadratically, and our analysis (which covers both the unique solution and non-unique solution cases) also suggests that this solver has a larger convergence basin than usual Newton because of the improved stability properties of Picard-Newton over Newton. Numerical tests show that Picard-Newton converges more reliably for higher Reynolds numbers and worse initial conditions than Picard and Newton iterations. We also consider enhancing the Picard step with Anderson acceleration (AA), and find that the AAPicard-Newton iteration has even better convergence properties on several benchmark test problems.
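As a generic analogue of the Picard-Newton composition described in this abstract (on a toy nonlinear system rather than the Navier-Stokes equations; the map, Jacobian and data below are assumptions made for illustration), one outer iteration applies a fixed-point (Picard) sweep and then a Newton correction from its output:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n))
A *= 0.5 / np.linalg.norm(A, 2)                    # make the Picard map a contraction
b = rng.standard_normal(n)

picard_map = lambda u: np.tanh(A @ u) + b          # fixed-point (Picard) form u = g(u)
F = lambda u: u - picard_map(u)                    # residual of the same nonlinear system
Jac = lambda u: np.eye(n) - (1.0 - np.tanh(A @ u) ** 2)[:, None] * A   # Jacobian of F

u = np.zeros(n)
for k in range(20):
    u_half = picard_map(u)                                  # Picard step
    u = u_half - np.linalg.solve(Jac(u_half), F(u_half))    # Newton step from the Picard output
    if np.linalg.norm(F(u)) < 1e-12:
        break
print(k, np.linalg.norm(F(u)))
```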