Abstract: We analyze and test a simple-to-implement two-step iteration for the incompressible Navier-Stokes equations that consists of first applying the Picard iteration and then applying the Newton iteration to the Picard output. We prove that this composition of Picard and Newton converges quadratically, and our analysis (which covers both the unique-solution and non-unique-solution cases) also suggests that this solver has a larger convergence basin than the usual Newton iteration, owing to the improved stability properties of Picard-Newton over Newton. Numerical tests show that Picard-Newton converges more reliably than the Picard and Newton iterations for higher Reynolds numbers and poorer initial guesses. We also consider enhancing the Picard step with Anderson acceleration (AA), and find that the resulting AAPicard-Newton iteration has even better convergence properties on several benchmark test problems.
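To make the structure of the composed iteration concrete, the following is a minimal sketch on a toy quadratic system F(u) = Au + (b·u)u - f, whose linear-plus-quadratic structure loosely mimics a discretized Navier-Stokes residual. The matrices, tolerances, and helper names are illustrative assumptions, not the paper's finite element setting.

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # linear ("viscous") part
b = np.array([1.0, 2.0])                 # coefficient of the quadratic term
f = np.array([1.0, 2.0])                 # right-hand side

def F(u):
    # Nonlinear residual F(u) = A u + (b.u) u - f
    return A @ u + (b @ u) * u - f

def picard_step(u):
    # Picard linearization: freeze the coefficient b.u at the current iterate
    return np.linalg.solve(A + (b @ u) * np.eye(2), f)

def newton_step(u):
    # Newton step with the exact Jacobian A + (b.u) I + u b^T
    J = A + (b @ u) * np.eye(2) + np.outer(u, b)
    return u - np.linalg.solve(J, F(u))

u = np.zeros(2)
for k in range(10):
    u = newton_step(picard_step(u))      # one Picard step, then one Newton step
    res = np.linalg.norm(F(u))
    print(f"iter {k}: residual {res:.2e}")
    if res < 1e-12:
        break
```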
Acceleration of nonlinear solvers for natural convection problems
Abstract: This paper develops an efficient and robust solution technique for the steady Boussinesq model of non-isothermal flow using Anderson acceleration applied to a Picard iteration. After analyzing the fixed-point operator associated with the nonlinear iteration to prove that certain stability and regularity properties hold, we apply the authors' recently constructed theory for Anderson acceleration, which yields a convergence result for the Anderson-accelerated Picard iteration for the Boussinesq system. The result shows that the leading term in the residual is improved by the gain in the optimization problem, but at the cost of additional higher-order terms that can be significant when the residual is large. We perform numerical tests that illustrate the theory and show that a two-stage choice of Anderson depth can be advantageous. We also consider Anderson acceleration applied to the Newton iteration for the Boussinesq equations, and observe that the acceleration allows the Newton iteration to converge for significantly higher Rayleigh numbers than it could without acceleration, even with a standard line search.
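For reference, a minimal depth-m Anderson acceleration routine for a generic fixed-point map g is sketched below; the contractive test map, depth, and tolerances are illustrative stand-ins for the Picard operator analyzed in the paper.

```python
import numpy as np

def anderson(g, x0, m=2, tol=1e-10, maxit=100):
    """Depth-m Anderson acceleration of the fixed-point iteration x = g(x)."""
    x = x0.copy()
    G, W = [], []                        # histories of g(x) and residuals w = g(x) - x
    for k in range(maxit):
        gx = g(x)
        w = gx - x
        if np.linalg.norm(w) < tol:
            break
        G.append(gx); W.append(w)
        G, W = G[-(m + 1):], W[-(m + 1):]
        if len(W) == 1:
            x = gx                       # plain (Picard-style) step on iteration one
        else:
            # Solve the small least-squares problem over residual differences,
            # then combine the corresponding g-values.
            dW = np.column_stack([W[i + 1] - W[i] for i in range(len(W) - 1)])
            dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
            gamma = np.linalg.lstsq(dW, w, rcond=None)[0]
            x = gx - dG @ gamma
    return x, k

# Example on a contractive map; the fixed point of cos is about 0.739.
x, its = anderson(lambda x: np.cos(x), np.array([1.0, 0.5]))
print(x, its)
```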
- Award ID(s):
- 2011490
- PAR ID:
- 10327786
- Date Published:
- Journal Name:
- Journal of Numerical Mathematics
- Volume:
- 29
- Issue:
- 4
- ISSN:
- 1570-2820
- Page Range / eLocation ID:
- 323 to 341
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
In this paper we develop convergence and acceleration theory for Anderson acceleration applied to Newton's method for nonlinear systems in which the Jacobian is singular at a solution. For these problems, the standard Newton algorithm converges only linearly in a region about the solution, and it has been previously observed that Anderson acceleration can substantially improve convergence without additional a priori knowledge and with little additional computational cost. We present an analysis of the Newton-Anderson algorithm in this context and introduce a novel, theoretically supported safeguarding strategy. The convergence results are demonstrated with the Chandrasekhar H-equation and a variety of benchmark examples.
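The following toy sketch illustrates the mechanism on the scalar problem f(x) = x*x, whose Jacobian is singular at the root x = 0, so plain Newton only halves the error per step. Treating the Newton update as a fixed-point residual and applying depth-1 Anderson mixing restores fast convergence. The problem and tolerances are illustrative, and the paper's safeguarding strategy is omitted here.

```python
f = lambda x: x * x                       # singular Jacobian at the root x = 0
df = lambda x: 2.0 * x

def newton_anderson(x, tol=1e-12, maxit=50):
    x_prev = w_prev = None
    for k in range(maxit):
        if abs(f(x)) < tol:
            break
        w = -f(x) / df(x)                 # Newton update, used as the AA residual
        if w_prev is None:
            x_new = x + w                 # plain Newton on the first step
        else:
            dw = w - w_prev
            gamma = w * dw / (dw * dw)    # scalar least-squares coefficient
            x_new = (x + w) - gamma * ((x + w) - (x_prev + w_prev))
        x_prev, w_prev, x = x, w, x_new
    return x, k

print(newton_anderson(1.0))               # a few steps; plain Newton needs about 20
```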
-
The incremental Picard Yosida (IPY) method has recently been developed as an iteration for nonlinear saddle point problems that is as effective as Picard but more efficient. By combining ideas from algebraic splitting of linear saddle point solvers with incremental Picard-type iterations and grad-div stabilization, IPY improves on the standard Picard method by allowing easier linear solves at each iteration, without increasing the total number of nonlinear iterations compared to Picard. This paper extends the IPY methodology by studying it together with Anderson acceleration (AA). We prove that IPY for Navier-Stokes and regularized Bingham fits the recently developed analysis framework for AA, which implies that AA improves the linear convergence rate of IPY by scaling the rate with the gain of the AA optimization problem. Numerical tests illustrate a significant improvement in the convergence behavior of IPY methods from AA, for both Navier-Stokes and regularized Bingham.
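For intuition about the "easier linear solves", here is a generic Yosida-style algebraic splitting of a single saddle point solve [[A, B^T], [B, 0]] [u; p] = [f; g], where a cheap approximation of A replaces it in the Schur complement and the velocity correction. All matrices and names are illustrative; the incremental Picard update and grad-div terms specific to IPY are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.eye(4) * 5.0 + 0.1 * rng.random((4, 4))
A = 0.5 * (A + A.T)                      # SPD-ish velocity block
B = rng.random((2, 4))                   # divergence block
f, g = rng.random(4), rng.random(2)

Ahat_inv = np.diag(1.0 / np.diag(A))     # cheap approximate inverse of A

ustar = np.linalg.solve(A, f)            # step 1: velocity predictor
S = B @ Ahat_inv @ B.T                   # step 2: approximate Schur complement
p = np.linalg.solve(S, B @ ustar - g)    #         pressure solve
u = ustar - Ahat_inv @ (B.T @ p)         # step 3: velocity correction

print("divergence residual:", np.linalg.norm(B @ u - g))
```

Note that the constraint Bu = g is satisfied exactly because the same approximate inverse appears in both the Schur complement and the correction; the splitting error lands in the momentum equation instead.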
-
State-of-the-art seismic imaging techniques treat inversion tasks such as full-waveform inversion (FWI) and least-squares reverse time migration (LSRTM) as partial differential equation-constrained optimization problems. Due to the large scale of these problems, gradient-based optimization algorithms are preferred in practice for updating the model iteratively. Higher-order methods converge in fewer iterations but often require higher computational costs, more line-search steps, and larger memory footprints, so a balance among these aspects has to be struck. We evaluate Anderson acceleration (AA), a popular strategy for speeding up the convergence of fixed-point iterations, as a way to accelerate the steepest-descent algorithm, which we treat as a fixed-point iteration. Independent of the unknown parameter dimensionality, the computational cost of implementing the method reduces to an extremely low-dimensional least-squares problem, and the cost can be further reduced by a low-rank update. We determine the theoretical connections and differences between AA and other well-known optimization methods such as L-BFGS and the restarted generalized minimal residual method, and compare their computational cost and memory requirements. Numerical examples of FWI and LSRTM applied to the Marmousi benchmark demonstrate the acceleration effects of AA. Compared with the steepest-descent method, AA achieves faster convergence and provides results competitive with some quasi-Newton methods, making it an attractive optimization strategy for seismic inversion.
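A minimal sketch of this viewpoint: a fixed-step steepest-descent update g(x) = x - alpha * grad(x) is treated as a fixed-point map and wrapped in windowed Anderson acceleration. The quadratic objective, step size, and depth are illustrative assumptions, not the paper's seismic setting.

```python
import numpy as np

Q = np.diag([100.0, 10.0, 1.0])          # ill-conditioned quadratic 0.5 x^T Q x
grad = lambda x: Q @ x
alpha = 0.01                             # stable step for the largest eigenvalue
g = lambda x: x - alpha * grad(x)        # steepest descent as a fixed-point map

x, m, G, W = np.ones(3), 3, [], []
for k in range(200):
    if np.linalg.norm(grad(x)) < 1e-10:
        break
    gx = g(x)
    w = gx - x
    G.append(gx); W.append(w)
    G, W = G[-(m + 1):], W[-(m + 1):]
    if len(W) == 1:
        x = gx                           # first step: plain steepest descent
    else:
        # AA's only overhead: a least-squares problem with at most m columns,
        # independent of the dimension of x.
        dW = np.column_stack([W[i + 1] - W[i] for i in range(len(W) - 1)])
        dG = np.column_stack([G[i + 1] - G[i] for i in range(len(G) - 1)])
        gamma = np.linalg.lstsq(dW, w, rcond=None)[0]
        x = gx - dG @ gamma
print("converged in", k, "iterations")   # plain descent needs a few thousand here
```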
-
The purpose of this paper is to develop a practical strategy to accelerate Newton's method in the vicinity of singular points. We present an adaptive safeguarding scheme with a tunable parameter, which we call adaptive γ-safeguarding, that can be used in tandem with Anderson acceleration to improve the performance of Newton's method when solving problems at or near singular points. The key features of adaptive γ-safeguarding are that it converges locally for singular problems and that it can detect nonsingular problems automatically, in which case the Newton-Anderson iterates are scaled toward a standard Newton step. The result is a flexible algorithm that performs well for singular and nonsingular problems and can recover convergence from both standard Newton and Newton-Anderson with the right parameter choice. This leads to faster local convergence than both Newton's method and unsafeguarded Newton-Anderson, with effectively no additional computational cost. We demonstrate three strategies for implementing Newton-Anderson and γ-safeguarded Newton-Anderson to solve parameter-dependent problems near singular points. For our benchmark problems, we take two parameter-dependent incompressible flow systems: flow in a channel and Rayleigh-Bénard convection.
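As a heavily simplified stand-in for the safeguarding idea (not the paper's adaptive γ-safeguarding rule), the sketch below runs depth-1 Newton-Anderson but clips the Anderson coefficient whenever it would dominate the step, falling back toward a plain Newton step. The test problem, clipping rule, and parameter are hypothetical illustrations.

```python
import numpy as np

def F(x):
    return np.array([x[0] ** 2, x[0] + x[1]])      # Jacobian singular at root (0, 0)

def J(x):
    return np.array([[2.0 * x[0], 0.0], [1.0, 1.0]])

def safeguarded_newton_anderson(x, gamma_max=0.5, tol=1e-8, maxit=100):
    x_prev = w_prev = None
    for k in range(maxit):
        if np.linalg.norm(F(x)) < tol:
            break
        w = np.linalg.solve(J(x), -F(x))           # Newton update
        if w_prev is None:
            x_new = x + w                          # plain Newton on the first step
        else:
            dw = w - w_prev
            denom = dw @ dw
            # Safeguard: clip the AA coefficient toward a plain Newton step.
            gamma = 0.0 if denom == 0 else np.clip((w @ dw) / denom,
                                                   -gamma_max, gamma_max)
            x_new = (x + w) - gamma * ((x + w) - (x_prev + w_prev))
        x_prev, w_prev, x = x, w, x_new
    return x, k

print(safeguarded_newton_anderson(np.array([1.0, 1.0])))
```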