Title: Acceleration of nonlinear solvers for natural convection problems
Abstract: This paper develops an efficient and robust solution technique for the steady Boussinesq model of non-isothermal flow using Anderson acceleration applied to a Picard iteration. After analyzing the fixed point operator associated with the nonlinear iteration to prove that certain stability and regularity properties hold, we apply the authors' recently constructed theory for Anderson acceleration, which yields a convergence result for the Anderson accelerated Picard iteration for the Boussinesq system. The result shows that the leading term in the residual is improved by the gain in the optimization problem, but at the cost of additional higher order terms that can be significant when the residual is large. We perform numerical tests that illustrate the theory, and show that a 2-stage choice of Anderson depth can be advantageous. We also consider Anderson acceleration applied to the Newton iteration for the Boussinesq equations, and observe that the acceleration allows the Newton iteration to converge for significantly higher Rayleigh numbers than it could without acceleration, even with a standard line search.
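For concreteness, the following is a minimal Python/NumPy sketch of the core algorithm: depth-m Anderson acceleration wrapped around a generic fixed-point (Picard) map. All names here (anderson_picard, g, m) are illustrative assumptions, not taken from the paper; in the paper's setting g would be one Picard step of a finite element Boussinesq solve rather than the toy map used below.

```python
import numpy as np

def anderson_picard(g, x0, m=5, tol=1e-10, max_iter=100):
    """Depth-m Anderson acceleration wrapped around the fixed-point
    (Picard) iteration x <- g(x).  Dense-algebra sketch for small
    problems; g is any callable mapping arrays to arrays."""
    x = np.asarray(x0, dtype=float)
    gx = g(x)
    w = gx - x                                   # residual w_k = g(x_k) - x_k
    G_hist, W_hist = [gx], [w]
    for _ in range(max_iter):
        if np.linalg.norm(w) < tol:
            break
        mk = min(m, len(W_hist) - 1)             # depth actually available
        if mk == 0:
            x = gx                               # plain Picard step to start
        else:
            # columns are differences of consecutive residuals / g-values
            dW = np.column_stack([W_hist[-j] - W_hist[-j - 1] for j in range(1, mk + 1)])
            dG = np.column_stack([G_hist[-j] - G_hist[-j - 1] for j in range(1, mk + 1)])
            # the AA least-squares ("optimization") problem at this step
            gamma, *_ = np.linalg.lstsq(dW, w, rcond=None)
            x = gx - dG @ gamma                  # undamped Anderson update
        gx = g(x)
        w = gx - x
        G_hist.append(gx); W_hist.append(w)
        if len(W_hist) > m + 1:                  # keep only the last m+1 iterates
            G_hist.pop(0); W_hist.pop(0)
    return x

# toy usage: accelerate the contractive scalar map g(x) = cos(x)
fp = anderson_picard(np.cos, np.array([1.0]), m=3)
```

The least-squares solve is the "optimization problem" whose gain improves the leading term of the residual bound, and the history truncation at m+1 entries is what the abstract calls the Anderson depth.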
Award ID(s):
2011490
NSF-PAR ID:
10327786
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of Numerical Mathematics
Volume:
29
Issue:
4
ISSN:
1570-2820
Page Range / eLocation ID:
323 to 341
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    State-of-the-art seismic imaging techniques treat inversion tasks such as full-waveform inversion (FWI) and least-squares reverse time migration (LSRTM) as partial differential equation-constrained optimization problems. Due to their large-scale nature, gradient-based optimization algorithms are preferred in practice to update the model iteratively. Higher-order methods converge in fewer iterations but often require higher computational costs, more line-search steps, and more memory. A balance among these aspects has to be considered. We have conducted an evaluation using Anderson acceleration (AA), a popular strategy to speed up the convergence of fixed-point iterations, to accelerate the steepest-descent algorithm, which we innovatively treat as a fixed-point iteration. Independent of the unknown parameter dimensionality, the computational cost of implementing the method can be reduced to an extremely low-dimensional least-squares problem. The cost can be further reduced by a low-rank update. We determine the theoretical connections and the differences between AA and other well-known optimization methods such as L-BFGS and the restarted generalized minimal residual method, and compare their computational cost and memory requirements. Numerical examples of FWI and LSRTM applied to the Marmousi benchmark demonstrate the acceleration effects of AA. Compared with the steepest-descent method, AA can achieve faster convergence and can provide competitive results with some quasi-Newton methods, making it an attractive optimization strategy for seismic inversion.
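    The key device in this abstract, recasting steepest descent as a fixed-point iteration, can be illustrated with the anderson_picard sketch above. The quadratic objective below is a stand-in assumption (nothing like an FWI misfit), chosen only so that the gradient map is cheap and contractive.

```python
import numpy as np

# Toy stand-in for an inversion misfit: f(x) = 0.5 x^T A x - b^T x.
# Steepest descent with fixed step alpha is the fixed-point map
#   g(x) = x - alpha * grad f(x) = x - alpha * (A x - b),
# which can be handed directly to an AA routine such as anderson_picard above.
rng = np.random.default_rng(0)
Q = rng.standard_normal((50, 50))
A = Q.T @ Q + 50.0 * np.eye(50)          # symmetric positive-definite "Hessian"
b = rng.standard_normal(50)
alpha = 1.0 / np.linalg.norm(A, 2)       # fixed step size below the stability limit

def g(x):
    return x - alpha * (A @ x - b)       # one steepest-descent step

x_aa = anderson_picard(g, np.zeros(50), m=5)   # AA-accelerated steepest descent
```

    Since the AA least-squares problem lives in at most m dimensions, its cost is independent of the number of unknowns, which is the low-cost, low-memory property this abstract emphasizes.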
  2. The incremental Picard Yosida (IPY) method has recently been developed as an iteration for nonlinear saddle point problems that is as effective as Picard but more efficient. By combining ideas from algebraic splitting of linear saddle point solvers with incremental Picard‐type iterations and grad‐div stabilization, IPY improves on the standard Picard method by allowing for easier linear solves at each iteration—but without creating more total nonlinear iterations compared to Picard. This paper extends the IPY methodology by studying it together with Anderson acceleration (AA). We prove that IPY for Navier–Stokes and regularized Bingham fits the recently developed analysis framework for AA, which implies that AA improves the linear convergence rate of IPY by scaling the rate with the gain of the AA optimization problem. Numerical tests illustrate a significant improvement in convergence behavior of IPY methods from AA, for both Navier–Stokes and regularized Bingham.
  3. A one-step analysis of Anderson acceleration with general algorithmic depths is presented. The resulting residual bounds within both contractive and noncontractive settings reveal the balance between the contributions from the higher and lower order terms, which are both dependent on the success of the optimization problem solved at each step of the algorithm. The new residual bounds show that the terms introduced by the extrapolation are of higher order than was previously understood. In the contractive setting these bounds sharpen previous convergence and acceleration results. The bounds rely on sufficient linear independence of the differences between consecutive residuals, rather than assumptions on the boundedness of the optimization coefficients, allowing the introduction of a theoretically sound safeguarding strategy. Several numerical tests illustrate the analysis, primarily in the noncontractive setting, and demonstrate the use of the method, the safeguarding strategy, and theory-based guidance on dynamic selection of the algorithmic depth on a p-Laplace equation, a nonlinear Helmholtz equation, and the steady Navier–Stokes equations with high Reynolds number in three spatial dimensions.
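    In symbols, and with notation assumed here rather than quoted from the paper, a one-step bound of this type for a fixed-point map g with Lipschitz constant κ, residuals w_k = g(x_k) − x_k, damping factors β_k, and depth m_k takes the schematic form

```latex
% schematic one-step Anderson acceleration residual bound (notation assumed)
\|w_{k+1}\| \le \theta_k \bigl( (1-\beta_k) + \kappa \beta_k \bigr) \|w_k\|
             + C \Bigl( \sum_{j=0}^{m_k} \|w_{k-j}\| \Bigr)^{2},
\qquad
\theta_k = \frac{\min_{\gamma} \bigl\| w_k - \sum_{j=1}^{m_k} \gamma_j \, (w_{k-j+1} - w_{k-j}) \bigr\|}{\|w_k\|} .
```

    The gain θ_k ≤ 1 of the step-k optimization problem scales the linear term down, while the quadratic tail is the higher-order contribution that can dominate while the residual is still large.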
  4. This paper continues some recent work on the numerical solution of the steady incompressible Navier–Stokes equations. We present a new method, similar to the one presented in Rebholz et al. but with superior convergence and numerical properties. The method is efficient because it solves the same symmetric positive-definite system for the pressure at each iteration, which allows for simple preconditioning and the reuse of preconditioners. We also demonstrate how one can replace the Schur complement system with a diagonal matrix inversion while maintaining accuracy and convergence, at a small fraction of the numerical cost. Convergence is analyzed for Newton- and Picard-type algorithms, as well as for the Schur complement approximation.
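    The idea of replacing the Schur complement solve with a diagonal inversion can be illustrated generically. The sketch below uses a textbook Uzawa-type outer iteration on a dense saddle point system, not the paper's actual algorithm, and every name and parameter in it is an illustrative assumption.

```python
import numpy as np

def uzawa_diag(A, B, f, g_rhs, n_iter=200, omega=1.0):
    """Uzawa-type iteration for the saddle point system
        [A  B^T] [u]   [f]
        [B   0 ] [p] = [g_rhs],
    with the Schur complement S = B A^{-1} B^T replaced by its diagonal,
    so each pressure update costs only a diagonal inversion."""
    # diagonal of S, computed once: S_ii = b_i^T A^{-1} b_i
    S_diag = np.array([B[i] @ np.linalg.solve(A, B[i]) for i in range(B.shape[0])])
    p = np.zeros(B.shape[0])
    u = np.zeros(B.shape[1])
    for _ in range(n_iter):
        u = np.linalg.solve(A, f - B.T @ p)          # same SPD solve every pass
        p = p + omega * (B @ u - g_rhs) / S_diag     # diagonal "Schur" update
    return u, p
```

    Reusing the same matrix across iterations is what makes preconditioner reuse possible; in this toy that role is played by A, while the paper applies the idea to its pressure system.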
  5. In this paper, we focus on the computation of the nonparametric maximum likelihood estimator (NPMLE) in multivariate mixture models. Our approach discretizes this infinite-dimensional convex optimization problem by setting fixed support points for the NPMLE and optimizing over the mixing proportions. We propose an efficient and scalable semismooth Newton based augmented Lagrangian method (ALM). Our algorithm outperforms the state-of-the-art methods (Kim et al., 2020; Koenker and Gu, 2017), and is capable of handling n ≈ 10^6 data points with m ≈ 10^4 support points. A key advantage of our approach is its strategic utilization of the solution's sparsity, leading to structured sparsity in Hessian computations. As a result, our algorithm demonstrates better scaling in terms of m when compared to the mixsqp method (Kim et al., 2020). The computed NPMLE can be directly applied to denoising the observations in the framework of empirical Bayes. We propose new denoising estimands in this context along with their consistent estimates. Extensive numerical experiments are conducted to illustrate the efficiency of our ALM. In particular, we employ our method to analyze two astronomy data sets: (i) the Gaia-TGAS Catalog (Anderson et al., 2018), containing approximately 1.4 × 10^6 data points in two dimensions, and (ii) a data set from the APOGEE survey (Majewski et al., 2017) with approximately 2.7 × 10^4 data points.
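    To make the discretization concrete: once the support points are fixed, the NPMLE reduces to maximizing the mixture log-likelihood over the mixing proportions on the simplex. The sketch below assumes a Gaussian location mixture and uses plain EM updates in place of the paper's semismooth Newton ALM, purely for illustration; it also shows the empirical-Bayes posterior-mean denoiser built from the fit.

```python
import numpy as np
from scipy.stats import norm

def npmle_fixed_support(y, support, sigma=1.0, n_iter=500):
    """Discretized NPMLE: support points stay fixed and only the mixing
    proportions pi are optimized (EM stands in for the paper's ALM)."""
    L = norm.pdf(y[:, None], loc=support[None, :], scale=sigma)  # n x m likelihoods
    pi = np.full(len(support), 1.0 / len(support))
    for _ in range(n_iter):
        R = L * pi                                # E-step: responsibilities
        R = R / R.sum(axis=1, keepdims=True)
        pi = R.mean(axis=0)                       # M-step: new mixing proportions
    return pi

def posterior_mean(y, support, pi, sigma=1.0):
    """Empirical-Bayes denoising: posterior mean of each latent location."""
    W = norm.pdf(y[:, None], loc=support[None, :], scale=sigma) * pi
    return (W @ support) / W.sum(axis=1)
```

    The EM iteration shown costs O(nm) per pass and exposes none of the Hessian sparsity the paper exploits; it is only meant to show what fixing the support and optimizing the proportions means in practice.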