

This content will become publicly available on May 26, 2024

Title: Noncommutativity and physics: a non-technical review
Abstract We give an overview of the applications of noncommutative geometry to physics. Our focus is entirely on the conceptual ideas rather than on the underlying technicalities. Starting historically from the Heisenberg relations, we explain how noncommutativity in general yields a canonical time evolution, while at the same time allowing for the coexistence of discrete and continuous variables. The spectral approach to geometry is then explained to encompass two natural ingredients: the line element and the algebra. The relation between these two is dictated by so-called higher Heisenberg relations, from which both spin geometry and non-abelian gauge theory emerge. Our exposition indicates some of the applications in physics, including Pati–Salam unification beyond the Standard Model, the criticality of dimension 4, second quantization and entropy.
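For orientation, the original Heisenberg relation and the canonical time evolution it induces take the familiar textbook form (standard formulas, not reproduced from the review itself):

```latex
[\hat{q}, \hat{p}] = i\hbar\,\mathbb{1},
\qquad
\frac{\mathrm{d}\hat{A}}{\mathrm{d}t} = \frac{i}{\hbar}\,[\hat{H}, \hat{A}] .
```

The first relation is the noncommutativity of position and momentum; the second is the Heisenberg equation of motion, illustrating how a noncommutative algebra of observables carries its own time evolution.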
Award ID(s):
2207663
NSF-PAR ID:
10426448
Author(s) / Creator(s):
Date Published:
Journal Name:
The European Physical Journal Special Topics
ISSN:
1951-6355
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. INTRODUCTION
Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.
RATIONALE
A central question is whether classical ML algorithms can provably outperform non-ML algorithms in challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.
RESULTS
We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H(x) depends smoothly on m parameters (denoted by x). The ML algorithm learns from a set of training data consisting of sampled values of x, each accompanied by a classical representation of the ground state of H(x). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x. The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)–complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter. In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B. The ML algorithm then predicts the phase label for quantum states that were not encountered during training.
The classical ML algorithm not only classifies phases accurately, but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
CONCLUSION
We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.
Classical algorithms for quantum many-body problems. Classical ML algorithms learn from training data, obtained from either classical simulations or quantum experiments. Then, the ML algorithm produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
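The learning setup described above — training on sampled parameter values x paired with measured ground-state properties, then predicting for unseen x — can be sketched with a generic kernel ridge regression. Everything here is illustrative: the toy target function stands in for a ground-state property, and this is not the authors' actual algorithm or representation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a ground-state property O(x) of a Hamiltonian H(x)
# depending smoothly on m = 2 parameters; the true map is unknown to the learner.
def property_of_ground_state(x):
    return np.sin(3 * x[..., 0]) * np.cos(2 * x[..., 1])

m = 2
X_train = rng.uniform(-1, 1, size=(200, m))    # sampled parameter values
y_train = property_of_ground_state(X_train)    # "measured" training data

def rbf_kernel(A, B, gamma=5.0):
    # Gaussian kernel between two point sets.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Kernel ridge regression: solve (K + lam*I) alpha = y.
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

# Predict the property for parameter values not seen during training.
X_test = rng.uniform(-1, 1, size=(100, m))
y_pred = rbf_kernel(X_test, X_train) @ alpha
mse = np.mean((y_pred - property_of_ground_state(X_test)) ** 2)
print(f"mean-squared prediction error: {mse:.2e}")
```

The point of the sketch is only the data flow: sample x, pair it with a measured property, fit, then generalize to new x; the paper's guarantees concern far richer classical representations of ground states.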
  2. Linear quantum measurements with independent particles are bounded by the standard quantum limit, which limits the precision achievable in estimating unknown phase parameters. The standard quantum limit can be overcome by entangling the particles, but the sensitivity is often limited by the final state readout, especially for complex entangled many-body states with non-Gaussian probability distributions. Here, by implementing an effective time-reversal protocol in an optically engineered many-body spin Hamiltonian, we demonstrate a quantum measurement with non-Gaussian states with performance beyond the limit of the readout scheme. This signal amplification through a time-reversed interaction achieves the greatest phase sensitivity improvement beyond the standard quantum limit demonstrated to date in any full Ramsey interferometer. These results open the field of robust time-reversal-based measurement protocols offering precision not too far from the Heisenberg limit. Potential applications include quantum sensors that operate at finite bandwidth, and the principle we demonstrate may also advance areas such as quantum engineering, quantum measurements and the search for new physics using optical-transition atomic clocks. 
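The two sensitivity benchmarks mentioned above scale differently with particle number N: the standard quantum limit as 1/√N and the Heisenberg limit as 1/N. A minimal numerical comparison of these generic scalings (textbook limits only, not the paper's measured data):

```python
import math

def sql_phase_uncertainty(n):
    """Standard quantum limit: Delta(phi) = 1/sqrt(N) for N independent particles."""
    return 1.0 / math.sqrt(n)

def heisenberg_phase_uncertainty(n):
    """Heisenberg limit: Delta(phi) = 1/N, the best allowed by quantum mechanics."""
    return 1.0 / n

for n in (100, 10_000):
    # Maximum possible metrological gain of entangled over independent particles.
    gain_db = 20 * math.log10(sql_phase_uncertainty(n) / heisenberg_phase_uncertainty(n))
    print(f"N={n}: SQL={sql_phase_uncertainty(n):.4f} rad, "
          f"HL={heisenberg_phase_uncertainty(n):.6f} rad, max gain={gain_db:.1f} dB")
```

The gap between the two limits, 10·log10(N) dB, is what time-reversal readout protocols such as the one demonstrated aim to close.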
3. Abstract We carry out a comparative analysis of the relation between the mass of supermassive black holes (BHs) and the stellar mass of their host galaxies at 0.2 < z < 1.7 using well-matched observations and multiple state-of-the-art simulations (e.g., MassiveBlackII, Horizon-AGN, Illustris, TNG, and a semianalytic model, SAM). The observed sample consists of 646 uniformly selected Sloan Digital Sky Survey quasars (0.2 < z < 0.8) and 32 broad-line active galactic nuclei (AGNs; 1.2 < z < 1.7) with imaging from Hyper Suprime-Cam (HSC) for the former and Hubble Space Telescope (HST) for the latter. We first add realistic observational uncertainties to the simulation data and then construct a simulated sample in the same manner as the observations. Over the full redshift range, our analysis demonstrates that all simulations predict a level of intrinsic scatter of the scaling relations comparable to the observations that appears to agree with the dispersion of the local relation. Regarding the mean relation, Horizon-AGN and TNG are in closest agreement with the observations at low and high redshift (z ∼ 0.2 and 1.5, respectively), while the other simulations show subtle differences within the uncertainties. For insight into the physics involved, the scatter of the scaling relation seen in the SAM is reduced by a factor of two, and closer to the observations, after adopting a new feedback model that considers the geometry of the AGN outflow. The consistency of the dispersion with redshift in our analysis supports the importance of both quasar- and radio-mode feedback prescriptions in the simulations. Finally, we highlight the importance of increasing the sensitivity (e.g., using the James Webb Space Telescope), thereby pushing to lower masses and minimizing biases due to selection effects.
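The analysis step described — adding observational uncertainties to simulated black-hole masses and comparing the scatter of the M_BH–M_* relation before and after — can be sketched generically. All numbers below (slope, normalization, scatter values) are synthetic placeholders, not the paper's samples or error model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic "simulated" galaxies: log stellar mass and a toy linear
# M_BH-M_* relation with intrinsic scatter (illustrative values, in dex).
log_mstar = rng.uniform(10.0, 11.5, size=2000)
intrinsic_scatter = 0.30
log_mbh_true = 8.0 + 1.0 * (log_mstar - 10.5) \
    + rng.normal(0, intrinsic_scatter, log_mstar.size)

# "Observe" the simulation: add a measurement uncertainty to the BH masses,
# mimicking the construction of a simulated sample matched to the data.
sigma_obs = 0.4
log_mbh_obs = log_mbh_true + rng.normal(0, sigma_obs, log_mstar.size)

def scatter_about_fit(x, y):
    # Dispersion of the residuals about the best-fit linear relation.
    slope, intercept = np.polyfit(x, y, 1)
    return np.std(y - (slope * x + intercept))

s_true = scatter_about_fit(log_mstar, log_mbh_true)
s_obs = scatter_about_fit(log_mstar, log_mbh_obs)
# The observed scatter combines intrinsic and measurement terms in quadrature.
print(f"intrinsic ~{s_true:.2f} dex, observed ~{s_obs:.2f} dex, "
      f"quadrature estimate ~{np.hypot(intrinsic_scatter, sigma_obs):.2f} dex")
```

This is only the bookkeeping: separating intrinsic from observational scatter is what allows the simulated and observed dispersions to be compared on an equal footing.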
  4. In two-phase materials, each phase having a non-local response in time, it has been found that for some driving fields the response somehow untangles at specific times, and allows one to directly infer useful information about the geometry of the material, such as the volume fractions of the phases. Motivated by this, and to obtain an algorithm for designing appropriate driving fields, we find approximate, measure independent, linear relations between the values that Markov functions take at a given set of possibly complex points, not belonging to the interval [-1,1] where the measure is supported. The problem is reduced to simply one of polynomial approximation of a given function on the interval [-1,1] and, to simplify the analysis, Chebyshev approximation is used. This allows one to obtain explicit estimates of the error of the approximation, in terms of the number of points and the minimum distance of the points to the interval [-1,1]. Assuming this minimum distance is bounded below by a number greater than 1/2, the error converges exponentially to zero as the number of points is increased. Approximate linear relations are also obtained that incorporate a set of moments of the measure. In the context of the motivating problem, the analysis also yields bounds on the response at any particular time for any driving field, and allows one to estimate the response at a given frequency using an appropriately designed driving field that effectively is turned on only for a fixed interval of time. The approximation extends directly to Markov-type functions with a positive semidefinite operator valued measure, and this has applications to determining the shape of an inclusion in a body from boundary flux measurements at a specific time, when the time-dependent boundary potentials are suitably tailored. 
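The reduction described above — evaluating a Markov function f(z) = ∫ dμ(t)/(z - t) at points off [-1,1] via polynomial approximation on [-1,1] — rests on the fact that the kernel t ↦ 1/(z - t) is approximated exponentially well by Chebyshev polynomials when z is bounded away from the interval. A numerical illustration of that decay (the point z and degrees are arbitrary choices, not the paper's algorithm):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

z = 1.8                        # evaluation point outside [-1, 1]
f = lambda t: 1.0 / (z - t)    # kernel of a Markov function f(z) = ∫ dμ(t)/(z - t)

# Chebyshev interpolation of the kernel on [-1, 1] at increasing degree:
# the max error should decay exponentially since z is bounded away from [-1, 1].
tt = np.linspace(-1, 1, 2001)
errors = []
for deg in (5, 10, 20):
    # Chebyshev nodes of the first kind; deg+1 nodes give an interpolant.
    nodes = np.cos(np.pi * (np.arange(deg + 1) + 0.5) / (deg + 1))
    p = C.chebfit(nodes, f(nodes), deg)
    errors.append(np.max(np.abs(C.chebval(tt, p) - f(tt))))
print("max errors at degrees 5, 10, 20:", errors)
```

Once the kernel is replaced by a polynomial, f(z) becomes a fixed linear combination of Chebyshev moments of μ, which is the measure-independent linear relation the abstract refers to.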
5. Predictive modeling in physical science and engineering is mostly based on solving certain partial differential equations, where the complexity of solutions is dictated by the geometry of the domain. Motivated by the broad applications of explicit solutions for spherical and ellipsoidal domains, in particular Eshelby's solution in elasticity, we propose a generalization of ellipsoidal shapes called polynomial inclusions. A polynomial inclusion (or p-inclusion for brevity) of a given degree is defined as a smooth, connected and bounded body whose Newtonian potential is a polynomial of that degree inside the body. From this viewpoint, ellipsoids are identified as the only p-inclusions of degree two; many fundamental problems in various physical settings admit simple closed-form solutions for general p-inclusions as for ellipsoids. Therefore, we anticipate that p-inclusions will be useful for applications including predictive materials models, optimal designs, and inverse problems. However, the existence of p-inclusions beyond degree two is not obvious, not to mention their explicit algebraic parameterizations. In this work, we explore alternative definitions and properties of p-inclusions in the context of potential theory. Based on the theory of variational inequalities, we show that p-inclusions do exist for certain polynomials, though a complete characterization remains open. We reformulate the determination of surfaces of p-inclusions as nonlocal geometric flows which are convenient for numerical simulations and studying geometric properties of p-inclusions. In two dimensions, by the method of conformal mapping we find an explicit algebraic parameterization of p-inclusions. We also propose a few open problems whose solution will deepen our understanding of relations between domain geometry, Newtonian potentials, and solutions to general partial differential equations.
We conclude by presenting examples of applications of p-inclusions in the context of Eshelby inclusion problems and magnet designs.
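The defining property can be stated compactly; the symbols below (Ω for the body, G for the fundamental solution of the Laplacian, k for the degree) are our own notation for illustration, not taken from the paper:

```latex
u_\Omega(x) \;=\; \int_\Omega G(x-y)\,\mathrm{d}y,
\qquad
\Omega \ \text{is a } p\text{-inclusion of degree } k
\ \Longleftrightarrow\
u_\Omega\big|_{\Omega} \ \text{is a polynomial of degree } k .
```

For an ellipsoid the interior potential is a quadratic polynomial (the classical Newton/Eshelby result), which is why ellipsoids appear as the degree-two case of this definition.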