Abstract We analyze and test a simple-to-implement two-step iteration for the incompressible Navier-Stokes equations that consists of first applying the Picard iteration and then applying the Newton iteration to the Picard output. We prove that this composition of Picard and Newton converges quadratically, and our analysis (which covers both the unique and non-unique solution cases) also suggests that this solver has a larger convergence basin than the usual Newton iteration because of the improved stability properties of Picard-Newton over Newton. Numerical tests show that Picard-Newton converges more reliably than the Picard and Newton iterations for higher Reynolds numbers and worse initial conditions. We also consider enhancing the Picard step with Anderson acceleration (AA), and find that the resulting AAPicard-Newton iteration has even better convergence properties on several benchmark test problems.
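To make the structure of the iteration concrete, here is a minimal sketch (not from the paper) of the Picard-then-Newton composition on a generic nonlinear system F(u) = 0 that also admits a fixed-point form u = G(u). The names `G`, `F`, `J`, and the toy example at the bottom are invented for illustration; the actual method is applied to finite element discretizations of the Navier-Stokes equations.

```python
import numpy as np

def picard_newton(u0, G, F, J, tol=1e-10, max_iter=50):
    """One Picard sweep followed by one Newton sweep per outer iteration.

    G : Picard (fixed-point) update, u_new = G(u)
    F : nonlinear residual, F(u) = 0 at the solution
    J : Jacobian of F
    """
    u = np.asarray(u0, dtype=float)
    for k in range(max_iter):
        u_half = G(u)                                # Picard step: stable, linearly convergent
        r = F(u_half)
        u = u_half - np.linalg.solve(J(u_half), r)   # Newton step applied to the Picard output
        if np.linalg.norm(F(u)) < tol:
            return u, k + 1
    return u, max_iter

# Toy example (not from the paper): solve u = cos(u) componentwise,
# i.e. F(u) = u - cos(u), with Picard map G(u) = cos(u).
G = np.cos
F = lambda u: u - np.cos(u)
J = lambda u: np.eye(len(u)) + np.diag(np.sin(u))
u_star, iters = picard_newton(np.zeros(3), G, F, J)
print(iters, u_star)
```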
Efficient and effective algebraic splitting‐based solvers for nonlinear saddle point problems
The incremental Picard Yosida (IPY) method has recently been developed as an iteration for nonlinear saddle point problems that is as effective as Picard but more efficient. By combining ideas from algebraic splitting of linear saddle point solvers with incremental Picard‐type iterations and grad‐div stabilization, IPY improves on the standard Picard method by allowing for easier linear solves at each iteration—but without creating more total nonlinear iterations compared to Picard. This paper extends the IPY methodology by studying it together with Anderson acceleration (AA). We prove that IPY for Navier–Stokes and regularized Bingham fits the recently developed analysis framework for AA, which implies that AA improves the linear convergence rate of IPY by scaling the rate with the gain of the AA optimization problem. Numerical tests illustrate a significant improvement in convergence behavior of IPY methods from AA, for both Navier–Stokes and regularized Bingham.
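Since both this record and the one above lean on Anderson acceleration, a minimal sketch of depth-m AA applied to a generic fixed-point map g may be useful. The map `g` stands in for an iteration such as Picard or IPY, the helper name `anderson` is invented, and the plain least-squares solve below is the textbook variant rather than the exact implementation used in the papers' experiments.

```python
import numpy as np

def anderson(g, x0, m=5, tol=1e-10, max_iter=200):
    """Depth-m Anderson acceleration for the fixed-point iteration x <- g(x).

    Each step combines the current residual w_k = g(x_k) - x_k with the previous
    residuals via a small least-squares problem, and uses the same coefficients
    to combine the corresponding iterates.
    """
    x = np.asarray(x0, dtype=float)
    X, W = [], []                          # histories of g(x_j) values and residuals
    for k in range(max_iter):
        gx = g(x)
        w = gx - x                         # fixed-point residual
        if np.linalg.norm(w) < tol:
            return gx, k
        X.append(gx)
        W.append(w)
        if len(W) > m + 1:                 # keep at most m residual differences
            X.pop(0); W.pop(0)
        if len(W) == 1:
            x = gx                         # plain fixed-point step on the first iteration
        else:
            # Solve min_alpha ||sum_j alpha_j w_j|| s.t. sum_j alpha_j = 1,
            # parametrized by differences against the most recent residual.
            dW = np.column_stack([Wj - W[-1] for Wj in W[:-1]])
            gamma, *_ = np.linalg.lstsq(dW, -W[-1], rcond=None)
            alpha = np.append(gamma, 1.0 - gamma.sum())
            x = sum(a * xj for a, xj in zip(alpha, X))
    return x, max_iter

# Usage on a toy contraction (not from the papers): g(x) = cos(x).
x_acc, iters = anderson(np.cos, np.zeros(3))
print(iters, x_acc)
```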
- Award ID(s): 2011490
- PAR ID: 10482321
- Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
- Date Published:
- Journal Name: Mathematical Methods in the Applied Sciences
- Volume: 47
- Issue: 1
- ISSN: 0170-4214
- Format(s): Medium: X
- Size(s): p. 451-474
- Sponsoring Org: National Science Foundation
More Like this
- Abstract A transformed primal-dual (TPD) flow is developed for a class of nonlinear smooth saddle point systems. The flow for the dual variable contains a Schur complement which is strongly convex. Exponential stability of the saddle point is obtained by showing the strong Lyapunov property. Several TPD iterations are derived from the TPD flow by implicit Euler, explicit Euler, implicit-explicit, and Gauss-Seidel methods with accelerated overrelaxation. When generalized to symmetric TPD iterations, the linear convergence rate is preserved for convex-concave saddle point systems under the assumption that the regularized functions are strongly convex. The effectiveness of augmented Lagrangian methods can be explained as a regularization of the lack of strong convexity and a preconditioning of the Schur complement. The algorithms and convergence analysis depend crucially on appropriate inner products of the spaces for the primal and dual variables. A clear convergence analysis with nonlinear inexact inner solvers is also developed.
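For context on the flow-then-discretize viewpoint in this abstract, the sketch below applies a plain explicit-Euler discretization to an untransformed primal-dual gradient flow on a toy quadratic saddle point problem. The matrices, data, and step size are invented, and the transformation and Schur-complement preconditioning that give the TPD iterations their guarantees are not reproduced here.

```python
import numpy as np

# Toy quadratic saddle point problem (invented for illustration):
#   min_u max_p  0.5 * u^T A u - f^T u + p^T (B u - b),
# whose optimality (KKT) system is  A u + B^T p = f,  B u = b.
A = np.diag([1.0, 1.5, 2.0, 2.5])           # SPD primal block
B = np.array([[1.0, 0.0, 1.0,  0.0],
              [0.0, 1.0, 0.0, -1.0]])       # full-row-rank constraint matrix
f = np.array([1.0, -2.0, 0.5, 1.5])
b = np.array([0.3, -0.7])

u = np.zeros(4)
p = np.zeros(2)
dt = 0.1                                     # explicit Euler step size

# Explicit Euler discretization of the (untransformed) primal-dual flow
#   u' = -(A u + B^T p - f)   (descent in the primal variable),
#   p' =   B u - b            (ascent in the dual variable).
for k in range(2000):
    ru = A @ u + B.T @ p - f
    rp = B @ u - b
    u = u - dt * ru
    p = p + dt * rp

print("primal residual:", np.linalg.norm(A @ u + B.T @ p - f))
print("dual residual:  ", np.linalg.norm(B @ u - b))
```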
- Stochastic second-order methods accelerate local convergence in strongly convex optimization by using noisy Hessian estimates to precondition gradients. However, they typically achieve superlinear convergence only when Hessian noise diminishes, which increases per-iteration costs. Prior work [arXiv:2204.09266] introduced a Hessian averaging scheme that maintains low per-iteration cost while achieving superlinear convergence, but with slow global convergence, requiring ~O(κ^2) iterations to reach the superlinear rate of ~O((1/t)^{t/2}), where κ is the condition number. This paper proposes a stochastic Newton proximal extragradient method that improves these bounds, delivering faster global linear convergence and achieving the same fast superlinear rate in only ~O(κ) iterations. The method extends the Hybrid Proximal Extragradient (HPE) framework, yielding improved global and local convergence guarantees for strongly convex functions with access to a noisy Hessian oracle.
- State-of-the-art seismic imaging techniques treat inversion tasks such as full-waveform inversion (FWI) and least-squares reverse time migration (LSRTM) as partial differential equation-constrained optimization problems. Due to the large-scale nature, gradient-based optimization algorithms are preferred in practice to update the model iteratively. Higher-order methods converge in fewer iterations but often require higher computational costs, more line-search steps, and bigger memory storage. A balance among these aspects has to be considered. We have conducted an evaluation using Anderson acceleration (AA), a popular strategy to speed up the convergence of fixed-point iterations, to accelerate the steepest-descent algorithm, which we innovatively treat as a fixed-point iteration. Independent of the unknown parameter dimensionality, the computational cost of implementing the method can be reduced to an extremely low dimensional least-squares problem. The cost can be further reduced by a low-rank update. We determine the theoretical connections and the differences between AA and other well-known optimization methods such as L-BFGS and the restarted generalized minimal residual method and compare their computational cost and memory requirements. Numerical examples of FWI and LSRTM applied to the Marmousi benchmark demonstrate the acceleration effects of AA. Compared with the steepest-descent method, AA can achieve faster convergence and can provide competitive results with some quasi-Newton methods, making it an attractive optimization strategy for seismic inversion.
- Stochastic second-order methods are known to achieve fast local convergence in strongly convex optimization by relying on noisy Hessian estimates to precondition the gradient. Yet, most of these methods achieve superlinear convergence only when the stochastic Hessian noise diminishes, requiring an increase in the per-iteration cost as time progresses. Recent work [arXiv:2204.09266] addressed this issue via a Hessian averaging scheme that achieves a superlinear convergence rate without increasing the per-iteration cost. However, the considered method exhibits a slow global convergence rate, requiring up to ~O(κ^2) iterations to reach the superlinear rate of ~O((1/t)^{t/2}), where κ is the problem's condition number. In this paper, we propose a novel stochastic Newton proximal extragradient method that significantly improves these bounds, achieving a faster global linear rate and reaching the same fast superlinear rate in ~O(κ) iterations. We achieve this by developing a novel extension of the Hybrid Proximal Extragradient (HPE) framework, which simultaneously achieves fast global and local convergence rates for strongly convex functions with access to a noisy Hessian oracle.
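As a rough illustration of the Hessian-averaging idea that both of the entries above start from, the sketch below averages noisy Hessian estimates across iterations and uses the running average to precondition a Newton-type step on a toy quadratic. The noise model, step rule, and regularization are invented for the example, and the proximal-extragradient machinery of the proposed method is not shown.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy strongly convex quadratic f(x) = 0.5 * x^T Q x - c^T x (invented example).
n = 10
Q = np.diag(np.linspace(1.0, 20.0, n))      # true Hessian, condition number 20
c = rng.standard_normal(n)
x_star = np.linalg.solve(Q, c)

def noisy_hessian(sigma=0.2):
    """Symmetric noisy Hessian estimate, e.g. as produced by subsampling.

    The modest noise level keeps the averaged estimate well conditioned in practice.
    """
    E = sigma * rng.standard_normal((n, n))
    return Q + 0.5 * (E + E.T)

x = np.zeros(n)
H_bar = np.zeros((n, n))                    # running (uniform) average of Hessian estimates
for t in range(1, 201):
    H_bar += (noisy_hessian() - H_bar) / t  # averaging damps the Hessian noise over time
    g = Q @ x - c                           # exact gradient, for simplicity
    # Newton-type step preconditioned by the averaged Hessian (lightly regularized).
    x = x - np.linalg.solve(H_bar + 1e-6 * np.eye(n), g)

print("error:", np.linalg.norm(x - x_star))
```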