

Title: Beyond quantum cluster theories: multiscale approaches for strongly correlated systems
Abstract The degrees of freedom that confer to strongly correlated systems their many intriguing properties also render them fairly intractable by typical perturbative treatments. For this reason, the mechanisms responsible for their technologically promising properties remain mostly elusive. Computational approaches have played a major role in efforts to fill this void. In particular, dynamical mean-field theory and its cluster extension, the dynamical cluster approximation, have allowed significant progress. However, despite all the insightful results of these embedding schemes, computational constraints, such as the minus-sign problem in quantum Monte Carlo (QMC) and the exponential growth of the Hilbert space in exact diagonalization (ED) methods, still limit the length scale within which correlations can be treated exactly in the formalism. A recent advance aimed at overcoming these difficulties is the development of multiscale many-body approaches, whereby this challenge is addressed by introducing an intermediate length scale between the short length scale, where correlations are treated exactly using a cluster solver such as QMC or ED, and the long length scale, where correlations are treated in a mean-field manner. At this intermediate length scale, correlations can be treated perturbatively. This is the essence of multiscale many-body methods. We review various implementations of these multiscale many-body approaches, the results they have produced, and the outstanding challenges that should be addressed for further advances.
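The mean-field backbone that these multiscale schemes build on can be illustrated with a minimal sketch (an assumption for illustration, not code from the reviewed work): the DMFT self-consistency loop on the Bethe lattice, where the hybridization obeys Δ(iω) = t²G(iω). Here the "impurity solver" step is trivial (Σ = 0); in a real multiscale scheme that step is replaced by QMC or ED on a cluster, with perturbative corrections at the intermediate scale.

```python
import numpy as np

# Skeleton of the DMFT self-consistency structure (illustrative sketch):
# Bethe-lattice self-consistency Delta(iw) = t^2 G(iw), with the impurity
# "solved" in the non-interacting limit, Sigma = 0.
t = 1.0                                              # hopping (half-bandwidth 2t)
beta = 50.0                                          # inverse temperature
wn = 1j * np.pi * (2 * np.arange(64) + 1) / beta     # fermionic Matsubara frequencies

G = 1.0 / wn                                         # initial guess: atomic limit
for _ in range(200):
    delta = t**2 * G                                 # lattice self-consistency
    G_new = 1.0 / (wn - delta)                       # impurity step with Sigma = 0
    G = 0.5 * G + 0.5 * G_new                        # damped update for stability

# At convergence G satisfies G = 1 / (iw - t^2 G)
residual = np.max(np.abs(G - 1.0 / (wn - t**2 * G)))
print(residual)
```

A cluster extension such as the DCA replaces the single impurity by a coarse-grained cluster in momentum space; the loop structure above is unchanged.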
Award ID(s): 2014023; 1728457
NSF-PAR ID: 10343303
Author(s) / Creator(s):
Date Published:
Journal Name: Quantum Science and Technology
Volume: 7
Issue: 3
ISSN: 2058-9565
Page Range / eLocation ID: 033001
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. INTRODUCTION
Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.
RATIONALE
A central question is whether classical ML algorithms can provably outperform non-ML algorithms in challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.
RESULTS
We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H(x) depends smoothly on m parameters (denoted by x). The ML algorithm learns from a set of training data consisting of sampled values of x, each accompanied by a classical representation of the ground state of H(x). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x. The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)-complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter. In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B. The ML algorithm then predicts the phase label for quantum states that were not encountered during training.
The classical ML algorithm not only classifies phases accurately, but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
CONCLUSION
We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.
Figure caption: Classical algorithms for quantum many-body problems. Classical ML algorithms learn from training data, obtained from either classical simulations or quantum experiments. Then, the ML algorithm produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
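The learning setup described above, predicting a ground-state property of H(x) from sampled values of x, can be sketched with a deliberately tiny toy model (this is an illustration of the data-driven setting, not the paper's actual algorithm or representation): a single-qubit Hamiltonian H(x) = cos(x)Z + sin(x)X, whose gap is constant, and a plain kernel ridge regression that learns the map from x to the ground-state expectation ⟨Z⟩.

```python
import numpy as np

# Toy illustration of "learning from data" for ground-state properties.
# Training data come from exact diagonalization of a 1-qubit H(x); the
# model is kernel ridge regression with a Gaussian kernel (assumed choices).
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def ground_state_z(x):
    H = np.cos(x) * Z + np.sin(x) * X        # gap is 2 for all x
    vals, vecs = np.linalg.eigh(H)
    g = vecs[:, 0]                           # ground state (lowest eigenvalue)
    return float(g @ Z @ g)

rng = np.random.default_rng(0)
x_train = rng.uniform(0.2, 1.2, size=40)     # sampled parameter values
y_train = np.array([ground_state_z(x) for x in x_train])

def gauss_kernel(a, b, ell=0.25):
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ell**2))

K = gauss_kernel(x_train, x_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(x_train)), y_train)

# Predict on parameters never seen during training
x_test = np.linspace(0.3, 1.1, 20)
y_pred = gauss_kernel(x_test, x_train) @ alpha
y_true = np.array([ground_state_z(x) for x in x_test])
err = np.max(np.abs(y_pred - y_true))
print(err)
```

Because the target ⟨Z⟩ = -cos(x) varies smoothly inside a gapped phase, a modest amount of training data suffices here; the paper's result is the rigorous many-body analogue of this behavior.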
  2. Abstract

    Modeling and simulation is transforming modern materials science, becoming an important tool for the discovery of new materials and material phenomena, for gaining insight into the processes that govern materials behavior, and, increasingly, for quantitative predictions that can be used as part of a design tool in full partnership with experimental synthesis and characterization. Modeling and simulation is the essential bridge from good science to good engineering, spanning from fundamental understanding of materials behavior to deliberate design of new materials technologies leveraging new properties and processes. This Roadmap presents a broad overview of the extensive impact computational modeling has had on materials science in the past few decades, and offers focused perspectives on where the path forward lies as this rapidly expanding field evolves to meet the challenges of the next few decades. The Roadmap offers perspectives on advances within disciplines as diverse as phase-field methods to model mesoscale behavior and molecular dynamics methods to deduce the fundamental atomic-scale dynamical processes governing materials response, as well as on the challenges of interdisciplinary research that tackles complex materials problems in which the governing phenomena span different scales of materials behavior, requiring multiscale approaches. The shift from understanding fundamental materials behavior to developing quantitative approaches that explain and predict experimental observations requires advances in the methods and practice of simulation to ensure reproducibility and reliability, and interaction with a computational ecosystem that integrates new theory development, innovative applications, and an increasingly integrated software and computational infrastructure that takes advantage of ever more powerful computational methods and computing hardware.
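Of the methods named above, molecular dynamics is the most compact to sketch. A minimal example (illustrative only, not taken from the Roadmap) is velocity-Verlet integration of a single 1D harmonic "atom"; the same symplectic scheme, with realistic interatomic potentials and many particles, underlies atomistic materials simulation.

```python
# Minimal molecular dynamics sketch: velocity-Verlet integration of a
# 1D harmonic oscillator (assumed units: m = k = 1). Production MD codes
# use the same integrator with many-body interatomic potentials.
dt, steps = 0.01, 2000
m, k = 1.0, 1.0
x, v = 1.0, 0.0                  # initial position and velocity

def force(x):
    return -k * x                # toy interatomic force model

energies = []
f = force(x)
for _ in range(steps):
    v += 0.5 * dt * f / m        # half kick
    x += dt * v                  # drift
    f = force(x)
    v += 0.5 * dt * f / m        # half kick
    energies.append(0.5 * m * v**2 + 0.5 * k * x**2)

# Velocity Verlet is symplectic: the total energy (initially 0.5)
# oscillates slightly but does not drift.
drift = max(abs(e - 0.5) for e in energies)
print(drift)
```

Long-time energy conservation of this kind is what makes MD trajectories trustworthy for deducing dynamical processes, one of the reproducibility concerns the Roadmap raises.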

     
  3. Quantum systems have the potential to demonstrate significant computational advantage, but current quantum devices suffer from the rapid accumulation of error that prevents the storage of quantum information over extended periods. The unintentional coupling of qubits to their environment and each other adds significant noise to computation, and improved methods to combat decoherence are required to boost the performance of quantum algorithms on real machines. While many existing techniques for mitigating error rely on adding extra gates to the circuit [13, 20, 56], calibrating new gates [50], or extending a circuit's runtime [32], this article's primary contribution leverages the gates already present in a quantum program without extending circuit duration. We exploit circuit slack for single-qubit gates that occur in idle windows, scheduling the gates such that their timing can counteract some errors. Spin-echo corrections that mitigate decoherence on idling qubits act as inspiration for this work. Theoretical models, however, fail to capture all sources of noise in Noisy Intermediate-Scale Quantum devices, making practical solutions necessary that better minimize the impact of unpredictable errors in quantum machines. This article presents TimeStitch: a novel framework that pinpoints the optimum execution schedules for single-qubit gates within quantum circuits. TimeStitch, implemented as a compilation pass, leverages the reversible nature of quantum computation to boost the success of circuits on real quantum machines. Unlike past approaches that apply reversibility properties to improve quantum circuit execution [35], TimeStitch amplifies fidelity without violating critical-path frontiers in either the slack-tuning procedures or the final rescheduled circuit.
On average, compared to a state-of-the-art baseline, a practically constrained TimeStitch achieves a mean 38% relative improvement in success rates, with a maximum of 106%, while observing bounds on circuit depth. When unconstrained by depth criteria, TimeStitch produces a mean relative fidelity increase of 50% with a maximum of 256%. Finally, when TimeStitch intelligently leverages periodic dynamical decoupling within its scheduling framework, a mean 64% improvement is observed over the baseline, relatively outperforming stand-alone dynamical decoupling by 19%, with a maximum of 287%. 
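The spin-echo idea that inspires this scheduling can be verified in a few lines (a sketch of the underlying physics, not of TimeStitch's actual compilation pass): an unwanted Z-phase accumulated during an idle window is exactly refocused when an X gate is placed at the midpoint of the window.

```python
import numpy as np

# Spin-echo refocusing demo. An idle qubit drifts by Rz(theta); splitting
# the idle window with X gates cancels the drift: X Rz X Rz = I.
def rz(theta):
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

X = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.37                       # arbitrary unwanted idle-time phase

# time order (right to left): idle -> X -> idle -> X
echo = X @ rz(theta) @ X @ rz(theta)
no_echo = rz(theta) @ rz(theta)    # same idle duration, no echo pulses

print(np.round(echo, 12))          # exactly the identity: error refocused
```

Scheduling existing single-qubit gates so that they sit at echo-friendly points in idle windows gains this cancellation for free, without adding gates or extending the circuit.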
  4. Abstract
The formation of clusters at sub-saturation densities, as a result of many-body correlations, constitutes an essential feature for a reliable modelling of the nuclear matter equation of state (EoS). Phenomenological models that make use of energy density functionals (EDFs) offer a convenient approach to account for the presence of these bound states of nucleons when introduced as additional degrees of freedom. However, in these models clusters dissolve, by construction, when the nuclear saturation density is approached from below, revealing inconsistencies with recent findings that evidence the existence of short-range correlations (SRCs) even at larger densities. The idea of this work is to incorporate SRCs into established models for the EoS, in light of the importance of these features for the description of heavy-ion collisions, nuclear structure, and the astrophysical context. Our aim is to describe SRCs at supra-saturation densities by using effective quasi-clusters immersed in dense matter as a surrogate for correlations, in a regime where cluster dissolution is usually predicted by phenomenological models. Within the EDF framework, we explore a novel approach to embed SRCs within a relativistic mean-field model with density-dependent couplings through the introduction of suitable in-medium modifications of the cluster properties, in particular their binding energy shifts, which are responsible for describing the cluster dissolution. As a first exploratory step, the example of a quasi-deuteron within the generalized relativistic density functional approach is investigated. The zero-temperature case is examined, where the deuteron fraction is given by the density of a boson condensate. For the first time, suitable parameterizations of the cluster mass shift at zero temperature are derived for all baryon densities.
They are constrained by experimental results for the effective deuteron fraction in nuclear matter near saturation and by microscopic many-body calculations in the low-density limit. A proper description of well-constrained nuclear matter quantities at saturation is retained through a refit of the nucleon-meson coupling strengths. The proposed parameterizations also make it possible to determine the density dependence of the quasi-deuteron mass fraction at arbitrary isospin asymmetries. The strength of the deuteron-meson couplings proves to be of crucial importance. Novel effects on some thermodynamic quantities, such as the matter incompressibility, the symmetry energy, and its slope, are finally discerned and discussed. The findings of the present study represent a first step toward improving the description of nuclear matter and its EoS at supra-saturation densities in EDFs by considering correlations in an effective way. In a next step, the single-particle momentum distributions in nuclear matter can be explored using proper wave functions of the quasi-deuteron in the medium. The momentum distributions are expected to exhibit a high-momentum tail, as observed in the experimental study of SRCs by nucleon knockout with high-energy electrons. This will be studied in a forthcoming publication with an extensive presentation of the theoretical method and the results.
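The qualitative distinction between an ordinary cluster, which dissolves at a Mott density, and a quasi-cluster surrogate that survives at supra-saturation densities can be pictured with a purely hypothetical parameterization (the numbers and functional forms below are illustrative assumptions, not the paper's fitted mass shifts):

```python
import numpy as np

# Hypothetical illustration of cluster dissolution vs. a quasi-cluster
# surrogate. B0 is the deuteron vacuum binding energy; n_mott is an
# assumed Mott density. Functional forms are invented for illustration.
B0 = 2.22        # MeV, deuteron vacuum binding energy
n_mott = 0.08    # fm^-3, assumed Mott density

def binding_ordinary(n):
    # linear in-medium shift: the cluster is bound only below n_mott
    return np.maximum(B0 * (1.0 - n / n_mott), 0.0)

def binding_quasi(n, eps=0.3):
    # smooth surrogate that never fully vanishes, mimicking the residual
    # short-range correlations retained at supra-saturation densities
    return B0 * (eps + (1.0 - eps) * np.exp(-n / n_mott))

n = np.linspace(0.0, 0.32, 5)    # up to roughly twice saturation density
print(binding_ordinary(n))       # reaches zero at and above n_mott
print(binding_quasi(n))          # stays positive at all densities
```

The paper's actual parameterizations are instead constrained by the measured deuteron fraction near saturation and by low-density many-body theory, as described above.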
  5. Abstract
We consider Bayesian inference for large-scale inverse problems, where computational challenges arise from the need for repeated evaluations of an expensive forward model. This renders most Markov chain Monte Carlo approaches infeasible, since they typically require O(10^4) model runs, or more. Moreover, the forward model is often given as a black box or is impractical to differentiate. Therefore derivative-free algorithms are highly desirable. We propose a framework, built on Kalman methodology, to efficiently perform Bayesian inference in such inverse problems. The basic method rests on an approximation of the filtering distribution of a novel mean-field dynamical system, into which the inverse problem is embedded as an observation operator. Theoretical properties are established for linear inverse problems, demonstrating that the desired Bayesian posterior is given by the steady state of the law of the filtering distribution of the mean-field dynamical system, and proving exponential convergence to it. This suggests that, for nonlinear problems which are close to Gaussian, sequentially computing this law provides the basis for efficient iterative methods to approximate the Bayesian posterior. Ensemble methods are applied to obtain interacting particle system approximations of the filtering distribution of the mean-field model, and practical strategies to further reduce the computational and memory cost of the methodology are presented, including low-rank approximation and a bi-fidelity approach. The effectiveness of the framework is demonstrated in several numerical experiments, including proof-of-concept linear/nonlinear examples and two large-scale applications: learning of permeability parameters in subsurface flow, and learning subgrid-scale parameters in a global climate model. Moreover, the stochastic ensemble Kalman filter and various ensemble square-root Kalman filters are all employed and compared numerically.
The results demonstrate that the proposed method, based on exponential convergence to the filtering distribution of a mean-field dynamical system, is competitive with pre-existing Kalman-based methods for inverse problems. 
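The flavor of such derivative-free, Kalman-based inversion can be conveyed by a minimal ensemble Kalman iteration for a linear toy problem (a standard sketch in the spirit of the framework above, not the authors' exact mean-field algorithm): the ensemble is nudged toward the data using only forward-model evaluations, with no derivatives of G.

```python
import numpy as np

# Minimal ensemble Kalman inversion sketch for a linear forward model
# G(u) = A u with Gaussian observation noise of variance gamma.
rng = np.random.default_rng(1)
A = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 1.0]])   # forward operator
u_true = np.array([0.7, -0.3])
gamma = 0.01
y = A @ u_true + np.sqrt(gamma) * rng.standard_normal(3)

J = 50                                                # ensemble size
U = rng.standard_normal((J, 2))                       # prior ensemble

for _ in range(30):
    Gu = U @ A.T                                      # forward-model evaluations only
    du = U - U.mean(axis=0)
    dg = Gu - Gu.mean(axis=0)
    Cug = du.T @ dg / J                               # parameter-output cross-covariance
    Cgg = dg.T @ dg / J                               # output covariance
    K = Cug @ np.linalg.inv(Cgg + gamma * np.eye(3))  # Kalman gain
    # perturbed observations keep ensemble spread consistent with the noise
    Y = y + np.sqrt(gamma) * rng.standard_normal((J, 3))
    U = U + (Y - Gu) @ K.T

u_est = U.mean(axis=0)
print(u_est)                                          # close to u_true
```

Everything here needs only black-box calls to the forward map, which is exactly why Kalman-style methods are attractive when the model is expensive or non-differentiable; the paper's contribution is the mean-field formulation and its convergence guarantees, plus the cost-reduction strategies listed above.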