Title: A New Preconditioned Nonlinear Conjugate Gradient Method in Real Arithmetic for Computing the Ground States of Rotational Bose–Einstein Condensate
We propose a new nonlinear preconditioned conjugate gradient (PCG) method in real arithmetic for computing the ground states of rotational Bose--Einstein condensate, modeled by the Gross--Pitaevskii equation. Our algorithm presents several improvements over the PCG method in complex arithmetic studied by Antoine, Levitt, and Tang [J. Comput. Phys., 343 (2017), pp. 92--109]. We show that the special structure of the energy functional $$E(\phi)$$ and its gradient with respect to $$\phi$$ can be fully exploited in real arithmetic to evaluate them more efficiently. We propose a simple approach for fast evaluation of the energy functional, which enables exact line search. Most importantly, we derive the discrete Hessian operator of the energy functional and propose a shifted Hessian preconditioner for PCG, with which the ideal preconditioned Hessian has favorable eigenvalue distributions independent of the mesh size. This suggests that PCG with our ideal Hessian preconditioner is expected to exhibit mesh-size-independent asymptotic convergence behavior. In practice, our preconditioner is constructed by incomplete Cholesky factorization of the shifted discrete Hessian operator based on high-order finite difference discretizations. Numerical experiments in two-dimensional (2D) and three-dimensional (3D) domains show the efficiency of fast energy evaluation, the robustness of exact line search, and the improved convergence of PCG with our new preconditioner in iteration counts and runtime, notably for more challenging rotational BEC problems with high nonlinearity and rotational speed.
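As a rough illustration of the ingredients named in the abstract (fast energy evaluation, exact line search, and a shifted preconditioner), here is a minimal 1D sketch. The non-rotating 1D setting, the harmonic trap, the shifted-Laplacian preconditioner, and all parameter values are simplifying assumptions for illustration; the paper's actual algorithm uses the discrete Hessian with incomplete Cholesky factorization in 2D/3D and a nonlinear CG update rather than plain preconditioned descent.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from scipy.optimize import minimize_scalar

# Toy 1D Gross--Pitaevskii ground-state solve by preconditioned projected
# gradient descent with exact line search.  Hypothetical simplification:
# no rotation term, and a shifted-Laplacian preconditioner instead of the
# paper's shifted-Hessian / incomplete-Cholesky preconditioner.
n, L = 256, 16.0
x = np.linspace(-L / 2, L / 2, n)
h = x[1] - x[0]
lap = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n)) / h**2
V = 0.5 * x**2                       # harmonic trapping potential
beta = 100.0                         # interaction strength (assumed value)
A = (-0.5 * lap + sp.diags(V)).tocsc()

def energy(phi):
    # discretized E(phi) = int 0.5|phi'|^2 + V|phi|^2 + 0.5*beta*|phi|^4
    return h * (phi @ (A @ phi)) + 0.5 * beta * h * np.sum(phi**4)

def grad(phi):
    return 2.0 * h * (A @ phi + beta * phi**3)

def normalize(phi):                  # enforce the unit-mass constraint
    return phi / np.sqrt(h * np.sum(phi**2))

shift = 10.0                         # assumed shift; A + shift*I is SPD
M = spla.splu(A + shift * sp.identity(n, format="csc"))

phi = normalize(np.exp(-x**2 / 2))   # Gaussian initial guess
E0 = energy(phi)
for _ in range(200):
    d = M.solve(grad(phi))           # preconditioned gradient direction
    line = lambda t: energy(normalize(phi - t * d))
    t = minimize_scalar(line, bounds=(0.0, 1.0), method="bounded").x
    phi = normalize(phi - t * d)
```

The exact line search is cheap here because each trial energy is a single sparse matrix-vector product plus vector sums; the paper's fast-evaluation trick makes the analogous step inexpensive in its setting.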
Award ID(s):
2111496
PAR ID:
10528554
Author(s) / Creator(s):
;
Publisher / Repository:
SIAM
Date Published:
Journal Name:
SIAM Journal on Scientific Computing
Volume:
46
Issue:
3
ISSN:
1064-8275
Page Range / eLocation ID:
A1764 to A1792
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract In this paper, we explore the non-asymptotic global convergence rates of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method implemented with exact line search. Notably, due to Dixon’s equivalence result, our findings are also applicable to other quasi-Newton methods in the convex Broyden class employing exact line search, such as the Davidon-Fletcher-Powell (DFP) method. Specifically, we focus on problems where the objective function is strongly convex with Lipschitz continuous gradient and Hessian. Our results hold for any initial point and any symmetric positive definite initial Hessian approximation matrix. The analysis unveils a detailed three-phase convergence process, characterized by distinct linear and superlinear rates, contingent on the iteration progress. Additionally, our theoretical findings demonstrate the trade-offs between linear and superlinear convergence rates for BFGS when we modify the initial Hessian approximation matrix, a phenomenon further corroborated by our numerical experiments. 
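A minimal sketch of BFGS with exact line search, assuming a strongly convex quadratic objective (where the exact step length has a closed form); this is illustrative only, since the paper's analysis covers general strongly convex functions with Lipschitz gradient and Hessian.

```python
import numpy as np

# BFGS with exact line search on a strongly convex quadratic
# f(x) = 0.5 x^T Q x - b^T x; the problem data are arbitrary test values.
rng = np.random.default_rng(0)
n = 20
C = rng.standard_normal((n, n))
Q = C @ C.T + n * np.eye(n)              # symmetric positive definite Hessian
b = rng.standard_normal(n)
x_star = np.linalg.solve(Q, b)           # unique minimizer

def gradf(x):
    return Q @ x - b

x = np.zeros(n)
H = np.eye(n)                            # initial inverse-Hessian approximation
g = gradf(x)
for _ in range(100):
    if np.linalg.norm(g) < 1e-10:
        break
    d = -H @ g
    step = -(g @ d) / (d @ (Q @ d))      # exact line search for a quadratic
    s = step * d
    x_new = x + s
    g_new = gradf(x_new)
    y = g_new - g
    rho = 1.0 / (y @ s)                  # y@s > 0 for a convex quadratic
    I = np.eye(n)
    # standard BFGS update of the inverse-Hessian approximation
    H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
        + rho * np.outer(s, s)
    x, g = x_new, g_new
```

On a quadratic, BFGS with exact line search generates the same iterates as conjugate gradient and terminates in at most n steps; the interesting non-asymptotic behavior studied in the paper appears for general strongly convex objectives.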
    more » « less
  2. We consider using gradient descent to minimize the nonconvex function $$f(X)=\phi(XX^{T})$$ over an $$n\times r$$ factor matrix $$X$$, in which $$\phi$$ is an underlying smooth convex cost function defined over $$n\times n$$ matrices. While only a second-order stationary point $$X$$ can be provably found in reasonable time, if $$X$$ is additionally \emph{rank deficient}, then its rank deficiency certifies it as being globally optimal. This way of certifying global optimality necessarily requires the search rank $$r$$ of the current iterate $$X$$ to be \emph{overparameterized} with respect to the rank $$r^{\star}$$ of the global minimizer $$X^{\star}$$. Unfortunately, overparameterization significantly slows down the convergence of gradient descent, from a linear rate with $$r=r^{\star}$$ to a sublinear rate when $$r>r^{\star}$$, even when $$\phi$$ is strongly convex. In this paper, we propose an inexpensive preconditioner that restores the convergence rate of gradient descent back to linear in the overparameterized case, while also making it agnostic to possible ill-conditioning in the global minimizer $$X^{\star}$$. 
    more » « less
  3. We propose and analyze a two-scale finite element method for the Isaacs equation. The fine scale is given by the mesh size $$h$$ whereas the coarse scale $$\varepsilon$$ is dictated by an integro-differential approximation of the partial differential equation. We show that the method satisfies the discrete maximum principle provided that the mesh is weakly acute. This, in conjunction with weak operator consistency of the finite element method, allows us to establish convergence of the numerical solution to the viscosity solution as $$\varepsilon, h \to 0$$ with $$\varepsilon \gtrsim (h|\log h|)^{1/2}$$. In addition, using a discrete Alexandrov-Bakelman-Pucci estimate we deduce rates of convergence, under suitable smoothness assumptions on the exact solution. 
    more » « less
  4. Abstract Discrete ill‐posed inverse problems arise in many areas of science and engineering. Their solutions are very sensitive to perturbations in the data. Regularization methods aim at reducing this sensitivity. This article considers an iterative regularization method, based on iterated Tikhonov regularization, that was proposed in M. Donatelli and M. Hanke, Fast nonstationary preconditioned iterative methods for ill‐posed problems, with application to image deblurring, Inverse Problems, 29 (2013), Art. 095008, 16 pages. In this method, the exact operator is approximated by an operator that is easier to work with. However, the convergence theory requires the approximating operator to be spectrally equivalent to the original operator. This condition is rarely satisfied in practice. Nevertheless, this iterative method determines accurate image restorations in many situations. We propose a modification of the iterative method that relaxes the demand of spectral equivalence to a requirement that is easier to satisfy. We show that, although the modified method is not an iterative regularization method, it maintains one of the most important theoretical properties for this class of methods, namely monotonic decrease of the reconstruction error. Several computed experiments show the good performance of the proposed method. 
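A sketch of nonstationary iterated Tikhonov with an approximate operator, in the spirit of the Donatelli-Hanke scheme this abstract builds on: each correction is computed with a cheap surrogate B of the exact blur A. The 1D Gaussian blur, the banded surrogate, and the geometric $$\alpha_k$$ schedule are illustrative assumptions.

```python
import numpy as np

# Nonstationary iterated Tikhonov for a 1D deblurring test problem,
# using a banded surrogate B in place of the exact (dense) blur A.
rng = np.random.default_rng(2)
n = 100
t = np.arange(n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 8.0)
A /= A.sum(axis=1, keepdims=True)        # row-normalized Gaussian blur
# cheap approximation: keep only a band of half-width 6 around the diagonal
B = np.where(np.abs(t[:, None] - t[None, :]) <= 6, A, 0.0)

x_true = np.zeros(n)
x_true[40:60] = 1.0                      # piecewise-constant test signal
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # blurred, noisy data

x = np.zeros(n)
alpha = 1.0
errs = [np.linalg.norm(x - x_true)]
for _ in range(10):
    r = b - A @ x                        # residual w.r.t. the exact operator
    # Tikhonov-regularized correction computed with the surrogate B
    x = x + B.T @ np.linalg.solve(B @ B.T + alpha * np.eye(n), r)
    alpha *= 0.5                         # nonstationary: geometric decay of alpha_k
    errs.append(np.linalg.norm(x - x_true))
```

In image deblurring the surrogate is typically a structured operator (e.g., one diagonalized by fast transforms) so that each regularized solve is fast; the banded matrix above merely stands in for that idea.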
    more » « less
  5. Full waveform inversion (FWI) and least-squares reverse time migration (LSRTM) are popular imaging techniques that can be solved as PDE-constrained optimization problems. Due to the large-scale nature, gradient- and Hessian-based optimization algorithms are preferred in practice to find the optimizer iteratively. However, a balance between the evaluation cost and the rate of convergence needs to be considered. We propose the use of Anderson acceleration (AA), a popular strategy to speed up the convergence of fixed-point iterations, to accelerate a gradient descent method. We show that AA can achieve fast convergence that provides competitive results with some quasi-Newton methods. Independent of the dimensionality of the unknown parameters, the computational cost of implementing the method can be reduced to an extremely low-dimensional least-squares problem, which makes AA an attractive method for seismic inversion. 
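A minimal Anderson-acceleration sketch wrapped around the gradient-descent fixed-point map $$g(x) = x - \alpha \nabla f(x)$$; the strongly convex quadratic objective is an illustrative stand-in for the FWI/LSRTM misfits, and the window size and step length are assumptions.

```python
import numpy as np

# Anderson acceleration (window m) of the gradient-descent fixed-point
# iteration x_{k+1} = g(x_k).  Per step it solves only an m-column
# least-squares problem, independent of the parameter dimension n.
rng = np.random.default_rng(3)
n = 50
C = rng.standard_normal((n, n))
Q = C @ C.T + np.eye(n)                  # SPD Hessian of the test objective
b = rng.standard_normal(n)
x_star = np.linalg.solve(Q, b)
alpha = 1.0 / np.linalg.norm(Q, 2)       # stable gradient step size

def g(x):                                # one plain gradient-descent step
    return x - alpha * (Q @ x - b)

m = 5                                    # AA window (assumed)
x = np.zeros(n)
xs, fs = [], []                          # recent iterates and residuals g(x)-x
for _ in range(100):
    f = g(x) - x
    xs.append(x); fs.append(f)
    xs, fs = xs[-(m + 1):], fs[-(m + 1):]
    if len(fs) > 1:
        dX = np.column_stack([xs[i + 1] - xs[i] for i in range(len(xs) - 1)])
        dF = np.column_stack([fs[i + 1] - fs[i] for i in range(len(fs) - 1)])
        gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
        x = x + f - (dX + dF) @ gamma    # Anderson-mixed update (beta = 1)
    else:
        x = x + f                        # first step: plain fixed-point update
```

The least-squares problem has at most m columns regardless of n, which is what keeps the per-iteration overhead negligible next to the cost of a gradient (i.e., wave-equation) evaluation.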
    more » « less