This work concerns the local convergence theory of Newton and quasi-Newton methods for convex-composite optimization, in which one minimizes an objective that can be written as the composition of a convex function with a continuously differentiable function. We focus on the case in which the convex function is a potentially infinite-valued piecewise linear-quadratic function. Such problems include nonlinear programming, minimax optimization, and estimation of nonlinear dynamics with non-Gaussian noise, as well as many modern approaches to large-scale data analysis and machine learning. Our approach embeds the optimality conditions for convex-composite optimization problems into a generalized equation. We establish conditions for strong metric subregularity and strong metric regularity of the corresponding set-valued mappings. This allows us to extend the classical convergence theory of Newton and quasi-Newton methods to the broader class of non-finite-valued piecewise linear-quadratic convex-composite optimization problems. In particular, we establish local quadratic convergence of the Newton method under conditions that parallel those in nonlinear programming.
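As a minimal concrete instance of this framework, taking the convex piece h(y) = 0.5*||y||^2 (a single-piece linear-quadratic function) reduces the Newton-type step to classical Gauss-Newton for nonlinear least squares. The sketch below is only illustrative (the inner map `c`, the starting point, and the tolerances are assumptions, not the paper's setting) and shows the fast local convergence such steps deliver:

```python
import numpy as np

# Convex-composite objective f(x) = h(c(x)) with h(y) = 0.5*||y||^2,
# i.e. nonlinear least squares -- the simplest piecewise linear-quadratic case.
def c(x):
    # illustrative smooth inner map c: R^2 -> R^2 (an assumption for this sketch)
    return np.array([x[0]**2 + x[1] - 1.0,
                     x[0] - x[1]**2])

def jac_c(x):
    return np.array([[2.0 * x[0],  1.0],
                     [1.0,        -2.0 * x[1]]])

def newton_type_step(x):
    # Minimize the local model h(c(x) + J d) over d; for h = 0.5*||.||^2 this
    # is the Gauss-Newton step obtained from the normal equations.
    J, r = jac_c(x), c(x)
    return np.linalg.solve(J.T @ J, -J.T @ r)

x = np.array([2.0, 2.0])
for k in range(10):
    d = newton_type_step(x)
    x = x + d
    print(k, np.linalg.norm(c(x)))      # residual shrinks roughly quadratically
    if np.linalg.norm(d) < 1e-12:
        break
```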
A NONLINEAR ELIMINATION PRECONDITIONED INEXACT NEWTON METHOD FOR HETEROGENEOUS HYPERELASTICITY
We propose and study a nonlinear elimination preconditioned inexact Newton method for the numerical simulation of diseased human arteries with a heterogeneous hyperelastic model. We assume the artery is made of layers of distinct tissues and also contains plaque. Traditional Newton methods often work well for smooth and homogeneous arteries but suffer from slow convergence or outright failure because of the heterogeneity of diseased soft tissues when the material is quasi-incompressible. The proposed nonlinear elimination method adaptively finds a small number of equations causing the nonlinear stagnation and then eliminates them from the global nonlinear system. Using the affine invariance theory of Newton's method, we provide insight into why nonlinear elimination can improve the convergence of Newton iterations. Our numerical results show that combining nonlinear elimination with an initial guess interpolated from a coarse-level solution can lead to uniform convergence of Newton's method for this class of very difficult nonlinear problems.
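A toy illustration of the elimination idea on a small algebraic system (the system, the residual-based selection rule, and the inner iteration count are assumptions made for this sketch, not the paper's arterial-wall implementation): equations whose residuals dominate are solved locally first, and the corrected point is handed to the global Newton step.

```python
import numpy as np

def F(x):
    # small nonlinear system with one strongly nonlinear ("stiff") equation
    return np.array([x[0] + 0.5 * x[1] - 1.0,
                     0.5 * x[0] + x[1] - 2.0,
                     np.exp(5.0 * x[2]) - 1.0 + x[0]])

def J(x):
    return np.array([[1.0, 0.5, 0.0],
                     [0.5, 1.0, 0.0],
                     [1.0, 0.0, 5.0 * np.exp(5.0 * x[2])]])

def eliminate(x, bad, inner_iters=10):
    # local Newton solve of the flagged equations in the matching unknowns,
    # with all remaining unknowns frozen
    x = x.copy()
    for _ in range(inner_iters):
        x[bad] += np.linalg.solve(J(x)[np.ix_(bad, bad)], -F(x)[bad])
    return x

x = np.array([0.0, 0.0, 1.0])
for k in range(20):
    r = np.abs(F(x))
    bad = np.where(r > 10.0 * np.median(r))[0]   # adaptively flag equations causing stagnation
    if bad.size:
        x = eliminate(x, bad)
    x = x + np.linalg.solve(J(x), -F(x))         # global Newton step on the corrected point
    if np.linalg.norm(F(x)) < 1e-10:
        break
print("iterations:", k, " solution:", x)
```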
- Award ID(s):
- 1720366
- PAR ID:
- 10155912
- Date Published:
- Journal Name:
- SIAM Journal on Scientific Computing
- ISSN:
- 1064-8275
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
In this paper, we introduce a quasi-Newton method optimized for efficiently solving quasi-linear elliptic equations and systems, with a specific focus on GPU-based computation. By approximating the Jacobian matrix with a combination of a linear Laplacian and simplified nonlinear terms, our method reduces the computational overhead typical of traditional Newton methods while handling the large, sparse matrices generated from discretized PDEs. We also provide a convergence analysis demonstrating local convergence to the exact solution under optimal choices of the regularization parameter, ensuring stability and efficiency in each iteration. Numerical experiments in two- and three-dimensional domains validate the proposed method's robustness and computational gains with a tensor-product implementation. This approach offers a promising pathway for accelerating solvers for quasi-linear elliptic equations and systems, expanding the feasibility of complex simulations in physics, engineering, and other fields leveraging advanced hardware capabilities.
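A one-dimensional caricature of the idea (the model problem -((1+u^2)u')' = f, the mesh, and the pure-Laplacian approximation are assumptions for this sketch; the paper's method adds simplified nonlinear terms and targets tensor-product GPU discretizations in 2D/3D): the exact Jacobian is replaced by a fixed discrete Laplacian, so the same matrix can be factored once and reused in every iteration.

```python
import numpy as np

# 1D toy problem: -((1 + u^2) u')' = f on (0,1), u(0) = u(1) = 0, finite differences.
n = 100
h = 1.0 / (n + 1)
xs = np.arange(1, n + 1) * h
f = 10.0 * np.sin(np.pi * xs)

def residual(u):
    ue = np.concatenate(([0.0], u, [0.0]))             # Dirichlet padding
    a = 1.0 + (0.5 * (ue[:-1] + ue[1:])) ** 2          # coefficient at cell midpoints
    flux = a * (ue[1:] - ue[:-1]) / h
    return -(flux[1:] - flux[:-1]) / h - f

# Fixed approximate Jacobian: the linear Laplacian (factor once, reuse every step).
L = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

u = np.zeros(n)
for k in range(200):
    r = residual(u)
    if np.linalg.norm(r) < 1e-8 * np.linalg.norm(f):
        break
    u = u - np.linalg.solve(L, r)                      # quasi-Newton step with the frozen Laplacian
print("iterations:", k, " residual norm:", np.linalg.norm(residual(u)))
```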
In this paper we develop convergence and acceleration theory for Anderson acceleration applied to Newton's method for nonlinear systems in which the Jacobian is singular at a solution. For these problems, the standard Newton algorithm converges only linearly in a region about the solution, and it has been previously observed that Anderson acceleration can substantially improve convergence without additional a priori knowledge and with little additional computational cost. We present an analysis of the Newton-Anderson algorithm in this context and introduce a novel, theoretically supported safeguarding strategy. The convergence results are demonstrated on the Chandrasekhar H-equation and a variety of benchmark examples.
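A depth-one Newton-Anderson sketch on a problem whose Jacobian is singular at the root (the test system, the fixed mixing depth m = 1, and the omission of the paper's safeguarding strategy are simplifications made for illustration):

```python
import numpy as np

def F(x):
    # the root x* = (0, 0) is a double root in x[0], so the Jacobian is singular there
    return np.array([x[0] ** 2, x[0] + x[1]])

def J(x):
    return np.array([[2.0 * x[0], 0.0],
                     [1.0,        1.0]])

def newton_step(x):
    # least-squares solve guards against the (near-)singular Jacobian
    return np.linalg.lstsq(J(x), -F(x), rcond=None)[0]

x = np.array([1.0, 1.0])
x_prev = w_prev = None
for k in range(25):
    w = newton_step(x)                              # fixed-point residual of g(x) = x + w(x)
    if w_prev is None:
        x_new = x + w                               # plain Newton step on the first iteration
    else:
        dw = w - w_prev
        gamma = np.dot(w, dw) / np.dot(dw, dw)      # depth-1 Anderson mixing coefficient
        x_new = (1.0 - gamma) * (x + w) + gamma * (x_prev + w_prev)
    x_prev, w_prev, x = x, w, x_new
    print(k, np.linalg.norm(F(x)))
    if np.linalg.norm(F(x)) < 1e-14:
        break
```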
Krylov Cubic Regularized Newton: A Subspace Second-Order Method with Dimension-Free Convergence Rate

Second-order optimization methods, such as cubic regularized Newton methods, are known for their rapid convergence rates; nevertheless, they become impractical in high-dimensional problems due to their substantial memory requirements and computational costs. One promising approach is to execute second-order updates within a lower-dimensional subspace, giving rise to subspace second-order methods. However, the majority of existing subspace second-order methods randomly select subspaces, consequently resulting in slower convergence rates that depend on the problem's dimension $d$. In this paper, we introduce a novel subspace cubic regularized Newton method that achieves a dimension-independent global convergence rate of $\mathcal{O}\left(\frac{1}{mk}+\frac{1}{k^2}\right)$ for solving convex optimization problems. Here, $m$ represents the subspace dimension, which can be significantly smaller than $d$. Instead of adopting a random subspace, our primary innovation is to perform the cubic regularized Newton update within the Krylov subspace associated with the Hessian and the gradient of the objective function. This result marks the first instance of a dimension-independent convergence rate for a subspace second-order method. Furthermore, when specific spectral conditions of the Hessian are met, our method recovers the convergence rate of a full-dimensional cubic regularized Newton method. Numerical experiments show our method converges faster than existing random subspace methods, especially for high-dimensional problems.
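A rough sketch of the Krylov-subspace construction (the regularized logistic-regression test problem, the cubic constant M, and the use of a generic optimizer for the small m-dimensional cubic subproblem are assumptions for illustration; the paper solves that subproblem with a dedicated routine):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, d, m, M, lam = 500, 200, 10, 1.0, 1e-3     # samples, dimension, Krylov dim, cubic constant, ridge
A = rng.standard_normal((n, d)) / np.sqrt(d)
y = (rng.random(n) < 0.5).astype(float)

def grad_hess(x):
    # ridge-regularized logistic regression: a convex test objective
    p = 1.0 / (1.0 + np.exp(-A @ x))
    g = A.T @ (p - y) + lam * x
    H = A.T @ (A * (p * (1.0 - p))[:, None]) + lam * np.eye(d)
    return g, H

x = np.zeros(d)
for k in range(15):
    g, H = grad_hess(x)
    # Orthonormal basis of the Krylov subspace span{g, Hg, ..., H^(m-1) g}
    Q = np.zeros((d, m))
    v = g / np.linalg.norm(g)
    for j in range(m):
        Q[:, j] = v
        v = H @ v
        v -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ v)   # Gram-Schmidt against previous directions
        v /= np.linalg.norm(v)
    gr, Hr = Q.T @ g, Q.T @ H @ Q
    # Cubic-regularized Newton model restricted to the subspace
    def model(z):
        return gr @ z + 0.5 * z @ Hr @ z + (M / 6.0) * np.linalg.norm(z) ** 3
    z = minimize(model, np.zeros(m)).x
    x = x + Q @ z
    print(k, np.linalg.norm(g))
```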
Non-asymptotic analyses of quasi-Newton methods have gained traction recently. In particular, several works have established a non-asymptotic superlinear rate of $O((1/\sqrt{t})^t)$ for the (classic) BFGS method by exploiting the fact that its error in approximating the Newton direction approaches zero. Moreover, a greedy variant of BFGS was recently proposed which accelerates convergence by directly approximating the Hessian, instead of the Newton direction, and achieves a fast local quadratic convergence rate. Alas, the local quadratic convergence of Greedy-BFGS requires many more updates than the number of iterations BFGS needs for a local superlinear rate. This is because Greedy-BFGS approximates the Hessian directly, so its Newton direction approximation may not be as accurate as that of BFGS. In this paper, we close this gap and present a novel BFGS method that has the best of both worlds: it leverages the approximation ideas of both BFGS and Greedy-BFGS to properly approximate the Newton direction and the Hessian matrix simultaneously. Our theoretical results show that our method outperforms both BFGS and Greedy-BFGS in terms of convergence rate, while it reaches its quadratic convergence rate with fewer steps than Greedy-BFGS. Numerical experiments on various datasets also confirm our theoretical findings.
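For reference, the two update rules the hybrid draws on, sketched on a fixed quadratic with Hessian A (the quadratic setting, the coordinate-vector greedy selection, and the initial scaling follow the standard greedy-BFGS presentation; the paper's combined method is not reproduced here):

```python
import numpy as np

def bfgs_update(B, s, y):
    # classic BFGS update of the Hessian approximation B from a pair
    # (s, y) = (step, gradient difference)
    Bs = B @ s
    return B - np.outer(Bs, Bs) / (s @ Bs) + np.outer(y, y) / (s @ y)

def greedy_bfgs_update(B, A):
    # greedy variant: choose the coordinate vector u maximizing u'Bu / u'Au,
    # then apply the BFGS update with the exact pair (u, Au)
    u = np.zeros(B.shape[0])
    u[np.argmax(np.diag(B) / np.diag(A))] = 1.0
    return bfgs_update(B, u, A @ u)

rng = np.random.default_rng(1)
d = 10
C = rng.standard_normal((d, d))
A = C @ C.T + d * np.eye(d)            # Hessian of a strongly convex quadratic
B = np.linalg.norm(A, 2) * np.eye(d)   # initial approximation with B >= A

for k in range(200):
    B = greedy_bfgs_update(B, A)
print("Hessian approximation error:", np.linalg.norm(B - A))
```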