Abstract Many quantum algorithms have been developed to evaluate the eigenvalues of Hermitian matrices. However, few practical approaches exist for the eigenanalysis of non-Hermitian matrices, such as those arising in modern power systems. The main difficulty is that the eigenvector matrix of a general matrix can be non-unitary, so solving a general eigenvalue problem is inherently incompatible with existing unitary-gate-based quantum methods. To fill this gap, this paper introduces a Variational Quantum Universal Eigensolver (VQUE), which is deployable on noisy intermediate-scale quantum (NISQ) computers. Our new contributions include: (1) the first universal variational quantum algorithm capable of evaluating the eigenvalues of non-Hermitian matrices; inspired by Schur's triangularization theorem, VQUE unitarizes the eigenvalue problem into a search for unitary transformation matrices on quantum devices. (2) A Quantum Process Snapshot technique devised so that VQUE retains the potential quantum advantage inherited from the original variational quantum eigensolver; with $O(\log_2 N)$ additional quantum gates, this method efficiently identifies whether a unitary operator is triangular with respect to a given basis. (3) Successful deployment and validation of VQUE on a real noisy quantum computer, demonstrating the algorithm's feasibility. We also undertake a comprehensive parametric study to validate VQUE's scalability, generality, and performance in realistic applications.
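The Schur triangularization that the VQUE abstract invokes is a classical linear-algebra fact and can be checked numerically. The sketch below (illustrative only; it is the classical decomposition, not the variational quantum procedure) shows that any square matrix A, Hermitian or not, satisfies A = Q T Q^H with Q unitary and T upper triangular, and that the eigenvalues of A appear on the diagonal of T:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))       # a generic non-Hermitian matrix

# Complex Schur form: A = Q @ T @ Q^H, with Q unitary, T upper triangular
T, Q = schur(A, output="complex")

# Q is a unitary similarity transform of A
assert np.allclose(Q @ T @ Q.conj().T, A)

# The diagonal of T carries the (possibly complex) spectrum of A
eigs_schur = np.sort_complex(np.diag(T))
eigs_direct = np.sort_complex(np.linalg.eigvals(A))
assert np.allclose(eigs_schur, eigs_direct)
```

Because Q is unitary, searching for it is a task that unitary-gate-based quantum hardware can in principle represent, which is the bridge the abstract describes.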
Chemistry: The coupling science
We make the case for an enhanced adoption of matrix algebra in the undergraduate chemistry curriculum by laying out an example-driven perspective of Chemistry as a discipline that focuses on interactions, or couplings, among various microscopic entities. Many Physical Chemistry textbooks and courses emphasize an operator-driven approach to Quantum Chemistry, favoring it over the equivalent matrix formalism. For example, one particularly popular textbook does not even mention matrices until the discussion of Hückel molecular-orbital (MO) theory. We argue that educators' adherence to the operator-only approach misses a pedagogical opportunity to help create a highly beneficial parallel framework of Chemistry in learners' minds. This missing framework would conceptualize early on that Chemistry is not something that happens to stand-alone electrons, atoms, or molecules. Instead, Chemistry is all about interactions. The easiest, and most intuitive, way to describe many types of interactions mathematically is by using matrices. Many students and educators shy away from them, but matrices can be easily and intuitively understood simply as interaction or coupling tables. To a beginning learner's brain, the idea of a table is much less abstract than that of an operator. Yet tables (i.e., matrices) can be used as simple tools for building powerful conceptual frameworks for describing chemical forces using fairly simple algebra instead of the differential and integral calculus inherent in the operator representation. We will discuss several well- and less-well-known applications of matrices in chemistry, including a Fourier view of quantum confinement, vibrational mode couplings, and MO theory. In particular, we will describe a new density-matrix adaptation of Hückel MO theory to general bonding scenarios in which the original Hückel model simply does not apply.
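The "matrices as coupling tables" idea from the abstract is concrete in the standard Hückel treatment of butadiene: the Hamiltonian is just a 4x4 table whose off-diagonal 1s mark which carbon atoms are bonded (in units where the Coulomb integral alpha is 0 and the resonance integral beta is 1). A minimal sketch, using the textbook butadiene values:

```python
import numpy as np

# Hückel "coupling table" for butadiene: entry (i, j) = 1 iff carbons
# i and j are bonded; energies come out as E = alpha + x * beta.
H = np.array([[0., 1., 0., 0.],
              [1., 0., 1., 0.],
              [0., 1., 0., 1.],
              [0., 0., 1., 0.]])

x = np.linalg.eigvalsh(H)   # ascending orbital-energy coefficients

# Textbook closed form for butadiene: x = ±(√5 ± 1)/2 ≈ ±1.618, ±0.618
expected = [-(np.sqrt(5) + 1) / 2, -(np.sqrt(5) - 1) / 2,
            (np.sqrt(5) - 1) / 2,  (np.sqrt(5) + 1) / 2]
assert np.allclose(x, expected)
```

Diagonalizing the table with ordinary linear algebra replaces the differential-operator machinery entirely, which is exactly the pedagogical point the abstract makes.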
- Award ID(s):
- 2153986
- PAR ID:
- 10580879
- Publisher / Repository:
- 2024 Biennial Conference on Chemical Education
- Date Published:
- Format(s):
- Medium: X
- Location:
- Lexington, KY
- Sponsoring Org:
- National Science Foundation
More Like this
-
Many areas of machine learning and science involve large linear algebra problems, such as eigendecompositions, solving linear systems, computing matrix exponentials, and trace estimation. The matrices involved often have Kronecker, convolutional, block diagonal, sum, or product structure. In this paper, we propose a simple but general framework for large-scale linear algebra problems in machine learning, named CoLA (Compositional Linear Algebra). By combining a linear operator abstraction with compositional dispatch rules, CoLA automatically constructs memory and runtime efficient numerical algorithms. Moreover, CoLA provides memory efficient automatic differentiation, low precision computation, and GPU acceleration in both JAX and PyTorch, while also accommodating new objects, operations, and rules in downstream packages via multiple dispatch. CoLA can accelerate many algebraic operations, while making it easy to prototype matrix structures and algorithms, providing an appealing drop-in tool for virtually any computational effort that requires linear algebra. We showcase its efficacy across a broad range of applications, including partial differential equations, Gaussian processes, equivariant model construction, and unsupervised learning.
-
We consider the time and space required for quantum computers to solve a wide variety of problems involving matrices, many of which have only been analyzed classically in prior work. Our main results show that for a range of linear algebra problems—including matrix-vector product, matrix inversion, matrix multiplication and powering—existing classical time-space tradeoffs, several of which are tight for every space bound, also apply to quantum algorithms with at most a constant factor loss. For example, for almost all fixed matrices A, including the discrete Fourier transform (DFT) matrix, we prove that quantum circuits with at most T input queries and S qubits of memory require T = Ω(n^2/S) to compute matrix-vector product Ax for x ∈ {0, 1}^n. We similarly prove that matrix multiplication for n × n binary matrices requires T = Ω(n^3/√S). Because many of our lower bounds are matched by deterministic algorithms with the same time and space complexity, our results show that quantum computers cannot provide any asymptotic advantage for these problems with any space bound. We obtain matching lower bounds for the stronger notion of quantum cumulative memory complexity—the sum of the space per layer of a circuit. We also consider Boolean (i.e. AND-OR) matrix multiplication and matrix-vector products, improving the previous quantum time-space tradeoff lower bounds for n × n Boolean matrix multiplication to T = Ω(n^{2.5}/S^{1/4}) from T = Ω(n^{2.5}/S^{1/2}). Our improved lower bound for Boolean matrix multiplication is based on a new coloring argument that extracts more from the strong direct product theorem that was the basis for prior work. To obtain our tight lower bounds for linear algebra problems, we require much stronger bounds than strong direct product theorems. 
We obtain these bounds by adding a new bucketing method to the quantum recording-query technique of Zhandry that lets us apply classical arguments to upper bound the success probability of quantum circuits.
-
We prove lower bounds on the time and space required for quantum computers to solve a wide variety of problems involving matrices, many of which have only been analyzed classically in prior work. Using a novel way of applying recording query methods, we show that for many linear algebra problems, including matrix-vector product, matrix inversion, matrix multiplication, and powering, existing classical time-space tradeoffs also apply to quantum algorithms with at most a constant factor loss. For example, for almost all fixed matrices A, including the discrete Fourier transform (DFT) matrix, we prove that quantum circuits with at most T input queries and S qubits of memory require T = Ω(n^2/S) to compute the matrix-vector product Ax for x ∈ {0, 1}^n. We similarly prove that matrix multiplication for n × n binary matrices requires T = Ω(n^3/√S). Because many of our lower bounds are matched by deterministic algorithms with the same time and space complexity, our results show that quantum computers cannot provide any asymptotic advantage for these problems at any space bound. We also improve the previous quantum time-space tradeoff lower bounds for n × n Boolean (i.e. AND-OR) matrix multiplication from T = Ω(n^{2.5}/S^{1/2}) to T = Ω(n^{2.5}/S^{1/4}), which has optimal exponents for the powerful query algorithms to which it applies. Our method also yields improved lower bounds for classical algorithms.
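The CoLA entries in the list above center on exploiting matrix structure through dispatch rules. As a library-agnostic sketch (this is not CoLA's actual API, just the standard vec-trick identity such a framework can dispatch to), a Kronecker-structured matrix-vector product (A ⊗ B)x can be computed as vec(B X A^T) without ever materializing the dense Kronecker product:

```python
import numpy as np

def kron_matvec(A, B, x):
    """Compute (A ⊗ B) @ x for square A (n x n) and B (m x m) without
    forming the (n*m) x (n*m) dense matrix, via (A ⊗ B) vec(X) = vec(B X A^T)
    with column-major vec."""
    n, m = A.shape[0], B.shape[0]
    X = x.reshape(n, m).T                # unvec: recover X (m x n) from x
    return (B @ X @ A.T).T.reshape(-1)   # re-vec the m x n result

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
x = rng.standard_normal(12)

# Matches the dense computation, at O(nm(n+m)) cost instead of O(n^2 m^2)
assert np.allclose(kron_matvec(A, B, x), np.kron(A, B) @ x)
```

The cost drop from this one rule, applied automatically wherever Kronecker structure appears, illustrates why a compositional operator abstraction pays off at scale.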