

Title: Numerical solution of large scale Hartree–Fock–Bogoliubov equations
The Hartree–Fock–Bogoliubov (HFB) theory is the starting point for treating superconducting systems. However, the computational cost for solving large scale HFB equations can be much larger than that of the Hartree–Fock equations, particularly when the Hamiltonian matrix is sparse and the number of electrons N is relatively small compared to the matrix size N_b. We first provide a concise and relatively self-contained review of the HFB theory for general finite sized quantum systems, with special focus on the treatment of spin symmetries from a linear algebra perspective. We then demonstrate that the pole expansion and selected inversion (PEXSI) method can be particularly well suited for solving large scale HFB equations. For a Hubbard-type Hamiltonian, the cost of PEXSI is at most 𝒪(N_b^2) for both gapped and gapless systems, which can be significantly faster than the standard cubic scaling diagonalization methods. We show that PEXSI can solve a two-dimensional Hubbard–Hofstadter model with N_b up to 2.88 × 10^6, and the wall clock time is less than 100 s using 17 280 CPU cores. This enables the simulation of physical systems under experimentally realizable magnetic fields, which cannot be otherwise simulated with smaller systems.
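The record contains only the abstract; as a rough, self-contained illustration of the pole-expansion idea that PEXSI builds on (not the paper's optimized contour, nor its selected inversion of sparse factors), the Python sketch below approximates the Fermi operator of a small Bogoliubov–de Gennes-type matrix with a plain truncated Matsubara sum and checks the result against exact diagonalization. All sizes, parameters, and variable names are illustrative assumptions.

```python
# Hedged sketch: Matsubara-pole approximation of the Fermi operator of a small
# Bogoliubov-de Gennes (BdG) matrix.  PEXSI itself uses an optimized pole
# expansion with far fewer poles and *selected* inversion of the sparse shifted
# matrices; here each pole is handled by a full sparse LU solve to keep the
# example short.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, t, delta, mu, beta = 32, 1.0, 0.5, 0.0, 10.0   # illustrative parameters
npoles = 2000                                      # plain Matsubara sum converges slowly

# 1D tight-binding block H0 and a uniform s-wave pairing block Delta.
hop = sp.diags([-t * np.ones(n - 1)] * 2, [-1, 1])
H0 = (hop - mu * sp.identity(n)).tocsc()
D = (delta * sp.identity(n)).tocsc()

# BdG matrix [[H0, Delta], [Delta^H, -H0^T]] with particle-hole structure.
HBdG = sp.bmat([[H0, D], [D.conj().T, -H0.T]]).tocsc()
N = HBdG.shape[0]

# Reference: generalized density matrix Gamma = f(HBdG) by diagonalization.
evals, evecs = np.linalg.eigh(HBdG.toarray())
Gamma_exact = (evecs * (1.0 / (1.0 + np.exp(beta * evals)))) @ evecs.conj().T

# Pole sum: f(x) = 1/2 - (2/beta) * sum_k x/(x^2 + w_k^2), w_k = (2k+1)*pi/beta.
# At the matrix level each term is the Hermitian part of (HBdG - i w_k I)^{-1}.
Gamma = 0.5 * np.eye(N, dtype=complex)
rhs = np.eye(N, dtype=complex)
for k in range(npoles):
    wk = (2 * k + 1) * np.pi / beta
    lu = spla.splu((HBdG - 1j * wk * sp.identity(N)).tocsc())
    Minv = lu.solve(rhs)
    Gamma -= (2.0 / beta) * 0.5 * (Minv + Minv.conj().T)

err = np.abs(Gamma - Gamma_exact).max()
ne = np.trace(Gamma[:n, :n]).real   # electron number from the upper-left block
print(f"max|Gamma_poles - Gamma_exact| = {err:.2e}, electron number ~ {ne:.3f}")
```

The printed error shrinks only linearly in the number of Matsubara poles, which is exactly the slow convergence that the optimized pole expansion in PEXSI is designed to avoid, while selected inversion replaces the dense right-hand sides above with sparse ones.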
Award ID(s):
1652330
NSF-PAR ID:
10322303
Journal Name:
ESAIM: Mathematical Modelling and Numerical Analysis
Volume:
55
Issue:
3
ISSN:
0764-583X
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Numerical difficulties associated with computing matrix elements of operators between Hartree–Fock–Bogoliubov (HFB) wavefunctions have plagued the development of HFB-based many-body theories for decades. The problem arises from divisions by zero in the standard formulation of the nonorthogonal Wick’s theorem in the limit of vanishing HFB overlap. In this Communication, we present a robust formulation of Wick’s theorem that stays well-behaved regardless of whether the HFB states are orthogonal or not. This new formulation ensures cancellation between the zeros of the overlap and the poles of the Pfaffian, which appears naturally in fermionic systems. Our formula explicitly eliminates self-interaction, which otherwise causes additional numerical challenges. A computationally efficient version of our formalism enables robust symmetry-projected HFB calculations with the same computational cost as mean-field theories. Moreover, we avoid potentially diverging normalization factors by introducing a robust normalization procedure. The resulting formalism treats even and odd numbers of particles on equal footing and reduces to Hartree–Fock as a natural limit. As proof of concept, we present a numerically stable and accurate solution to a Jordan–Wigner-transformed Hamiltonian, whose singularities motivated the present work. Our robust formulation of Wick’s theorem is a most promising development for methods using quasiparticle vacuum states.

     
  2. Abstract We introduce the map of dynamics of quantum Bose gases into dynamics of quasifree states, which we call the “nonlinear quasifree approximation”. We use this map to derive the time-dependent Hartree–Fock–Bogoliubov (HFB) equations describing the dynamics of quantum fluctuations around a Bose–Einstein condensate. We prove global well-posedness of the HFB equations for pair potentials satisfying suitable regularity conditions, and we establish important conservation laws. We show that the space of solutions of the HFB equations has a symplectic structure reminiscent of a Hamiltonian system. This is then used to relate the HFB equations to the HFB eigenvalue equations discussed in the physics literature. We also construct Gibbs equilibrium states at positive temperature associated with the HFB equations, and we establish criteria for the appearance of Bose–Einstein condensation. 
  3. We consider the problem of learning density‐dependent molecular Hamiltonian matrices from time series of electron density matrices, all in the context of Hartree–Fock theory. Prior work developed a solution to this problem for small molecular systems with density and Hamiltonian matrices of size at most 6 × 6. Here, using a battery of techniques, we scale prior methods to larger molecular systems with, for example, 29 × 29 matrices. This includes systems that either have more electrons or are expressed in large basis sets such as 6‐311++G**. Scaling the method to larger systems enhances its relevance for realistic applications in chemistry and physics. To achieve this scaling, we apply dimensionality reduction, ridge regression and analytic computation of Hessians. Through the combination of these techniques, we are able to learn Hamiltonians by minimizing an objective function that encodes local propagation error. Importantly, these learned Hamiltonians can then be used to predict electron dynamics for thousands of steps: When we use our learned Hamiltonians to numerically solve the time‐dependent Hartree–Fock equation, we obtain predicted dynamics that are in close quantitative agreement with ground truth dynamics. This includes field‐off trajectories similar to the training data and field‐on trajectories outside of the training data.
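Since this abstract outlines the learning setup in words only, here is a minimal, hedged sketch of the "local propagation error" idea on synthetic data: a fixed Hermitian Hamiltonian (not the density-dependent Hamiltonians actually learned in the paper) is recovered from a density-matrix trajectory by ridge regression against a midpoint discretization of the von Neumann equation. All dimensions, step sizes, and the regularization strength are assumptions for illustration.

```python
# Hedged toy sketch: recover a fixed Hermitian Hamiltonian from a time series of
# density matrices by ridge regression on a one-step ("local") propagation error.
# The paper learns density-dependent Hamiltonians within time-dependent
# Hartree-Fock; this sketch uses the plain von Neumann equation dP/dt = -i[H, P]
# and synthetic data, so every name and parameter below is illustrative.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n, dt, nsteps, lam = 6, 0.02, 200, 1e-8

def random_hermitian():
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return 0.5 * (A + A.conj().T)

H_true = random_hermitian()

# Initial density matrix: occupy the lowest n//2 eigenvectors of an unrelated
# Hermitian matrix so that [H_true, P] != 0 and the dynamics are nontrivial.
_, W = np.linalg.eigh(random_hermitian())
P = W[:, : n // 2] @ W[:, : n // 2].conj().T

# Exact propagation P_{t+1} = U P_t U^H with U = exp(-i H dt).
U = expm(-1j * H_true * dt)
snaps = [P]
for _ in range(nsteps):
    P = U @ P @ U.conj().T
    snaps.append(P)

# Real basis of the space of n x n Hermitian matrices (n^2 elements).
basis = []
for i in range(n):
    E = np.zeros((n, n), dtype=complex); E[i, i] = 1.0; basis.append(E)
for i in range(n):
    for j in range(i + 1, n):
        S = np.zeros((n, n), dtype=complex); S[i, j] = S[j, i] = 1.0; basis.append(S)
        A = np.zeros((n, n), dtype=complex); A[i, j] = -1j; A[j, i] = 1j; basis.append(A)

# Local propagation error with a midpoint rule (still linear in H):
#   (P_{t+1} - P_t)/dt  ~  -i [H, (P_t + P_{t+1})/2]
blocks, targets = [], []
for t in range(nsteps):
    Pm = 0.5 * (snaps[t] + snaps[t + 1])
    y = (snaps[t + 1] - snaps[t]) / dt
    cols = np.stack([(-1j * (B @ Pm - Pm @ B)).ravel() for B in basis], axis=1)
    blocks.append(np.vstack([cols.real, cols.imag]))
    targets.append(np.concatenate([y.ravel().real, y.ravel().imag]))
Amat = np.vstack(blocks)
yvec = np.concatenate(targets)

# Ridge regression for the real expansion coefficients of H in the basis.
coef = np.linalg.solve(Amat.T @ Amat + lam * np.eye(len(basis)), Amat.T @ yvec)
H_learned = sum(c * B for c, B in zip(coef, basis))

# The identity component of H commutes with everything and is unidentifiable,
# so compare traceless parts; the residual comes from the finite-dt model error.
def traceless(M):
    return M - np.trace(M) / n * np.eye(n)

rel = (np.linalg.norm(traceless(H_learned) - traceless(H_true))
       / np.linalg.norm(traceless(H_true)))
print(f"relative error of learned Hamiltonian (traceless part): {rel:.2e}")
```

In this controlled setting the regression recovers the traceless part of H up to the discretization bias of the midpoint rule; per the abstract, the paper's actual pipeline additionally relies on dimensionality reduction and analytic Hessians to make the fit tractable for 29 × 29 density-dependent Hamiltonians.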

     
  4. Abstract

    We present a graph‐theoretic approach to adaptively compute many‐body approximations in an efficient manner to perform (a) accurate post‐Hartree–Fock (HF) ab initio molecular dynamics (AIMD) at density functional theory (DFT) cost for medium‐ to large‐sized molecular clusters, (b) hybrid DFT electronic structure calculations for condensed‐phase simulations at the cost of pure density functionals, (c) reduced‐cost on‐the‐fly basis extrapolation for gas‐phase AIMD and condensed phase studies, and (d) accurate post‐HF‐level potential energy surfaces at DFT cost for quantum nuclear effects. The salient features of our approach are ONIOM‐like in that (a) the full system (cluster or condensed phase) calculation is performed at a lower level of theory (pure DFT for condensed phase or hybrid DFT for molecular systems), and (b) this approximation is improved through a correction term that captures all many‐body interactions up to any given order within a higher level of theory (hybrid DFT for condensed phase; CCSD or MP2 for cluster), combined through graph‐theoretic methods. Specifically, a region of chemical interest is coarse‐grained into a set of nodes and these nodes are then connected to form edges based on a given definition of local envelope (or threshold) of interactions. The nodes and edges together define a graph, which forms the basis for developing the many‐body expansion. The methods are demonstrated through (a) ab initio dynamics studies on protonated water clusters and polypeptide fragments, (b) potential energy surface calculations on one‐dimensional water chains such as those found in ion channels, and (c) conformational stabilization and lattice energy studies on homogeneous and heterogeneous surfaces of water with organic adsorbates using two‐dimensional periodic boundary conditions.
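To make the composite-energy bookkeeping concrete, here is a minimal, hedged sketch of an ONIOM-like, graph-truncated many-body correction in which toy pair potentials stand in for the low- and high-level electronic structure methods; the fragment coordinates, cutoff, and energy functions are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch: ONIOM-like two-level composite energy with a graph-truncated
# many-body correction.  Toy pair potentials stand in for the cheap ("low") and
# expensive ("high") levels; coordinates, cutoff, and parameters are illustrative.
import itertools
import numpy as np

rng = np.random.default_rng(1)

# 18 "fragments" (graph nodes) on a jittered 3x3x2 grid with spacing ~2.
grid = 2.0 * np.array(list(itertools.product(range(3), range(3), range(2))), dtype=float)
coords = grid + rng.uniform(-0.2, 0.2, size=grid.shape)

def pair_energy(r, eps):
    # Toy Lennard-Jones-like interaction with minimum -eps at r = 1.
    return eps * ((1.0 / r) ** 12 - 2.0 * (1.0 / r) ** 6)

def energy(idx, eps):
    # Toy total energy of the sub-system built from the fragments in `idx`.
    return sum(pair_energy(np.linalg.norm(coords[i] - coords[j]), eps)
               for i, j in itertools.combinations(idx, 2))

def E_high(idx):  # stand-in for the expensive level (e.g., MP2/CCSD or hybrid DFT)
    return energy(idx, eps=1.00)

def E_low(idx):   # stand-in for the cheap level (e.g., pure or hybrid DFT)
    return energy(idx, eps=0.85)

def dE(idx):      # per-subgraph high-minus-low correction
    return E_high(idx) - E_low(idx)

# Graph: nodes are fragments; edges connect fragments within a local envelope.
cutoff = 2.5
nodes = list(range(len(coords)))
edges = [(i, j) for i, j in itertools.combinations(nodes, 2)
         if np.linalg.norm(coords[i] - coords[j]) < cutoff]

# Composite energy with the many-body correction truncated at edges (dimers):
#   E ~ E_low(full) + sum_i dE(i) + sum_(i,j in edges) [dE(i,j) - dE(i) - dE(j)]
correction = sum(dE([i]) for i in nodes)
correction += sum(dE([i, j]) - dE([i]) - dE([j]) for i, j in edges)
E_composite = E_low(nodes) + correction

print(f"high-level reference : {E_high(nodes):+.5f}")
print(f"low level only       : {E_low(nodes):+.5f}")
print(f"graph-corrected      : {E_composite:+.5f}")
```

Only the interactions captured by the graph are promoted to the high level, so the remaining error is the high-minus-low difference of whatever the local envelope leaves out; including higher-order subgraphs (triangles and beyond) tightens the approximation in the same way.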

     
  5. Abstract

    The Gaussian elimination with partial pivoting (GEPP) is a classical algorithm for solving systems of linear equations. Although in specific cases the loss of precision in GEPP due to roundoff errors can be very significant, empirical evidence strongly suggests that for a typical square coefficient matrix, GEPP is numerically stable. We obtain a (partial) theoretical justification of this phenomenon by showing that, given a random n × n standard Gaussian coefficient matrix A, the growth factor of the Gaussian elimination with partial pivoting is at most polynomially large in n with probability close to one. This implies that with probability close to one the number of bits of precision sufficient to solve Ax = b to m bits of accuracy using GEPP is m + O(log n), which improves an earlier estimate m + O(log^2 n) of Sankar, and which we conjecture to be optimal by the order of magnitude. We further provide tail estimates of the growth factor which can be used to support the empirical observation that GEPP is more stable than Gaussian elimination with no pivoting.
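As a small, hedged companion to the statement above, the sketch below measures the GEPP growth factor rho(A) = max_{k,i,j} |a_ij^(k)| / max_{i,j} |a_ij| empirically for standard Gaussian matrices, using a naive dense elimination that records the largest intermediate entry; the matrix sizes and trial counts are arbitrary choices for illustration, and nothing here reproduces the paper's probabilistic analysis.

```python
# Hedged sketch: empirical GEPP growth factors for standard Gaussian matrices.
# A naive O(n^3) dense elimination with partial pivoting tracks the largest
# entry appearing in any intermediate matrix A^(k).
import numpy as np

def gepp_growth_factor(A):
    U = np.array(A, dtype=float)
    n = U.shape[0]
    max0 = np.abs(U).max()
    max_inter = max0
    for k in range(n - 1):
        # Partial pivoting: bring the largest entry of column k (rows k..n-1) to the pivot.
        p = k + int(np.argmax(np.abs(U[k:, k])))
        if p != k:
            U[[k, p]] = U[[p, k]]
        # Eliminate below the pivot; the multipliers are bounded by 1 in modulus.
        mult = U[k + 1:, k] / U[k, k]
        U[k + 1:, k:] -= np.outer(mult, U[k, k:])
        U[k + 1:, k] = 0.0
        max_inter = max(max_inter, np.abs(U).max())
    return max_inter / max0

rng = np.random.default_rng(0)
for n in (64, 128, 256):
    rhos = [gepp_growth_factor(rng.standard_normal((n, n))) for _ in range(20)]
    print(f"n = {n:3d}: median growth factor {np.median(rhos):6.2f}, max {max(rhos):6.2f}")
```

The observed growth factors typically stay modest relative to n, in line with the polynomial-in-n bound stated in the abstract; removing the pivot search from the same loop usually illustrates the contrast with unpivoted elimination.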

     