

Title: Time-of-Flight Quantum Tomography of Single Atom Motion
Time of flight is an intuitive way to determine the velocity of particles and lies at the heart of many capabilities, ranging from mass spectrometry to fluid flow measurements. Here we show that time-of-flight imaging can realize tomography of a quantum state of motion of a single trapped atom. Tomography of motion requires studying the phase space spanned by both position and momentum. By combining time-of-flight imaging with coherent evolution of the atom in an optical tweezer trap, we are able to access arbitrary quadratures in phase space without relying on coupling to a spin degree of freedom. To create non-classical motional states, we harness quantum tunneling in the versatile potential landscape of optical tweezers, and our tomography both demonstrates Wigner function negativity and assesses the coherence of non-stationary states. Our demonstrated tomography concept has wide applicability to a range of particles and will enable characterization of non-classical states of more complex systems or massive dielectric particles.
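The reconstruction step behind such quadrature tomography can be illustrated with a short sketch. Assuming position histograms recorded after different hold times in the harmonic tweezer (each hold time rotating the measured quadrature by θ = ωt before release), a filtered back-projection recovers a Wigner-function estimate. The angle grid, binning, and the use of scikit-image's iradon are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Minimal sketch: Wigner-function estimate from rotated-quadrature
# histograms via filtered back-projection (inverse Radon transform).
# Assumptions for illustration: histograms are binned on a common
# quadrature grid; each rotation angle comes from the hold time in the
# harmonic trap (theta = omega * t_hold) before time-of-flight release.
import numpy as np
from skimage.transform import iradon  # assumed available

def reconstruct_wigner(quadrature_hists, angles_deg):
    """quadrature_hists: (n_bins, n_angles) array; each column is the
    measured probability density P(x_theta) at one rotation angle."""
    # Filtered back-projection returns a 2D phase-space quasi-probability;
    # for well-sampled angles this approximates the Wigner function.
    W = iradon(quadrature_hists, theta=angles_deg,
               filter_name="ramp", circle=False)
    W /= W.sum()  # normalize over the reconstruction grid
    return W

# Example with synthetic Gaussian data standing in for measured histograms.
x = np.linspace(-5.0, 5.0, 128)
angles = np.linspace(0.0, 180.0, 60, endpoint=False)
hists = np.exp(-x[:, None] ** 2) * np.ones((1, angles.size))
hists /= hists.sum(axis=0, keepdims=True)
W = reconstruct_wigner(hists, angles)
print(W.shape)
```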
Award ID(s): 2016244
NSF-PAR ID: 10340275
Journal Name: arXiv.org
ISSN: 2331-8422
Sponsoring Org: National Science Foundation

More Like this
  1. INTRODUCTION: Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.

RATIONALE: A central question is whether classical ML algorithms can provably outperform non-ML algorithms in challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.

RESULTS: We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H(x) depends smoothly on m parameters (denoted by x). The ML algorithm learns from a set of training data consisting of sampled values of x, each accompanied by a classical representation of the ground state of H(x). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x. The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)–complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter.
In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B. The ML algorithm then predicts the phase label for quantum states that were not encountered during training. The classical ML algorithm not only classifies phases accurately, but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.

CONCLUSION: We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.

Classical algorithms for quantum many-body problems: Classical ML algorithms learn from training data, obtained from either classical simulations or quantum experiments. Then, the ML algorithm produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
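As a toy illustration of the "learning from data" idea in item 1 (not the paper's actual algorithm or its rigorous guarantees), the sketch below regresses a ground-state property against the Hamiltonian parameters x from sampled training pairs. The kernel ridge model, the one-dimensional parameter, and the synthetic target standing in for a measured observable are all assumptions.

```python
# Minimal sketch of "predicting ground-state properties from data":
# learn a map x -> <O>_groundstate(H(x)) from sampled training pairs.
# The kernel choice and the synthetic target are illustrative only; the
# real algorithm works with classical representations of measured ground
# states and comes with provable error bounds.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)

# Training data: parameters x (here 1D) and a measured local observable.
x_train = rng.uniform(-1.0, 1.0, size=(200, 1))
y_train = np.tanh(3.0 * x_train[:, 0])             # stand-in for <O>(x)
y_train += 0.05 * rng.normal(size=y_train.shape)   # measurement noise

model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=5.0)
model.fit(x_train, y_train)

# Predict the property for Hamiltonians not seen during training.
x_test = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
print(model.predict(x_test))
```

The same pipeline with a classifier in place of the regressor would mirror the phase-classification task: label each training state as phase A or B and predict labels for unseen states.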
  2. An ensemble of atoms can operate as a quantum sensor by placing atoms in a superposition of two different states. Upon measurement of the sensor, each atom is individually projected into one of the two states. Creating quantum correlations between the atoms, that is, entangling them, could lead to resolutions surpassing the standard quantum limit [1–3] set by projections of individual atoms. Large amounts of entanglement [4–6] involving the internal degrees of freedom of laser-cooled atomic ensembles [4–16] have been generated in collective cavity quantum-electrodynamics systems, in which many atoms simultaneously interact with a single optical cavity mode. Here we report a matter-wave interferometer in a cavity quantum-electrodynamics system of 700 atoms that are entangled in their external degrees of freedom. In our system, each individual atom falls freely under gravity and simultaneously traverses two paths through space while entangled with the other atoms. We demonstrate both quantum non-demolition measurements and cavity-mediated spin interactions for generating squeezed momentum states, with directly observed sensitivities of $3.4^{+1.1}_{-0.9}$ dB and $2.5^{+0.6}_{-0.6}$ dB below the standard quantum limit, respectively. We successfully inject an entangled state into a Mach–Zehnder light-pulse interferometer with a directly observed sensitivity of $1.7^{+0.5}_{-0.5}$ dB below the standard quantum limit. The combination of particle delocalization and entanglement in our approach may influence developments of enhanced inertial sensors [17,18], searches for new physics, particles and fields [19–23], future advanced gravitational wave detectors [24,25], and accessing beyond-mean-field quantum many-body physics [26–30].
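As a quick, illustrative conversion (not taken from the paper), squeezing quoted in decibels below the standard quantum limit corresponds to a noise-variance ratio of 10^(−dB/10); the snippet below applies this to the three reported values.

```python
# Convert squeezing reported in dB below the standard quantum limit (SQL)
# into noise-variance ratios: variance / SQL variance = 10**(-dB / 10).
squeezing_db = {
    "QND measurement": 3.4,
    "cavity-mediated interactions": 2.5,
    "Mach-Zehnder interferometer": 1.7,
}
for label, db in squeezing_db.items():
    ratio = 10 ** (-db / 10)
    print(f"{label}: {db} dB -> variance about {ratio:.2f} x SQL")
# e.g. 3.4 dB corresponds to a variance roughly 0.46 times the SQL.
```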
  3. Abstract

    Emergence of fundamental forces from gauge symmetry is among our most profound insights about the physical universe. In nature, such symmetries remain hidden in the space of internal degrees of freedom of subatomic particles. Here we propose a way to realize and study gauge structures in real space, manifest in the external degrees of freedom of quantum states. We present a model based on a ring-shaped lattice potential, which allows for both Abelian and non-Abelian constructs. Nontrivial Wilson loops are shown to be possible via physical motion of the system. The underlying physics is based on the close analogy of geometric phase with gauge potentials that has been utilized to create synthetic gauge fields with internal states of ultracold atoms. By scaling up to an array with spatially varying parameters, a discrete gauge field can be realized in position space, and its dynamics mapped over macroscopic size and time scales.
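To make the Wilson-loop statement concrete: a Wilson loop is the path-ordered product of link matrices around a closed path, and for a non-Abelian field it is a matrix rather than a pure phase. The sketch below uses arbitrary SU(2) link matrices purely for illustration; they are not the ring-lattice model's parameters.

```python
# Minimal sketch: a non-Abelian Wilson loop as a path-ordered product of
# SU(2) link matrices around a closed plaquette.  The angles and axes
# are arbitrary illustrative values.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Links along the four edges of a plaquette (non-commuting exponents).
U1 = expm(1j * 0.7 * sx)
U2 = expm(1j * 0.4 * sz)
U3 = expm(-1j * 0.7 * sx)
U4 = expm(-1j * 0.4 * sz)

# Path-ordered product around the loop; for an Abelian field this would
# collapse to a pure phase, here it is a nontrivial SU(2) matrix.
W = U4 @ U3 @ U2 @ U1
print("Tr W =", np.trace(W))  # gauge-invariant loop observable
print("deviates from identity:", not np.allclose(W, np.eye(2)))
```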

     
  4. Particles placed inside an Abelian (commutative) gauge field can acquire different phases when traveling along the same path in opposite directions, as is evident from the Aharonov-Bohm effect. Such behaviors can get significantly enriched for a non-Abelian gauge field, where even the ordering of different paths cannot be switched. So far, real-space realizations of gauge fields have been limited to Abelian ones. We report an experimental synthesis of non-Abelian gauge fields in real space and the observation of the non-Abelian Aharonov-Bohm effect with classical waves and classical fluxes. On the basis of optical mode degeneracy, we break time-reversal symmetry in different manners, via temporal modulation and the Faraday effect, to synthesize tunable non-Abelian gauge fields. The Sagnac interference of two final states, obtained by reversely ordered path integrals, demonstrates the noncommutativity of the gauge fields. Our work introduces real-space building blocks for non-Abelian gauge fields, relevant for classical and quantum exotic topological phenomena. 
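The ordering statement can also be checked numerically: applying two non-commuting "flux" operations to the same input state in opposite orders yields different final states, which is what the Sagnac interference contrast probes. The specific unitaries below are illustrative assumptions, not the experiment's synthesized fluxes.

```python
# Minimal sketch of the non-Abelian ordering effect behind the
# Aharonov-Bohm-type interference: the orderings U_B U_A and U_A U_B
# produce different final states.  The matrices are illustrative only.
import numpy as np
from scipy.linalg import expm

sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

U_A = expm(1j * 0.6 * sy)   # stand-in for one synthesized flux
U_B = expm(1j * 0.9 * sz)   # stand-in for the other synthesized flux

psi_in = np.array([1.0, 0.0], dtype=complex)
psi_ab = U_B @ U_A @ psi_in   # traverse the operations in one order
psi_ba = U_A @ U_B @ psi_in   # traverse them in the reversed order

overlap = abs(np.vdot(psi_ab, psi_ba))
print(f"|<psi_AB|psi_BA>| = {overlap:.3f}  (equals 1 only if the fluxes commute)")
```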
  5. BACKGROUND: Optical sensing devices measure the rich physical properties of an incident light beam, such as its power, polarization state, spectrum, and intensity distribution. Most conventional sensors, such as power meters, polarimeters, spectrometers, and cameras, are monofunctional and bulky. For example, classical Fourier-transform infrared spectrometers and polarimeters, which characterize the optical spectrum in the infrared and the polarization state of light, respectively, can occupy a considerable portion of an optical table. Over the past decade, the development of integrated sensing solutions by using miniaturized devices together with advanced machine-learning algorithms has accelerated rapidly, and optical sensing research has evolved into a highly interdisciplinary field that encompasses devices and materials engineering, condensed matter physics, and machine learning. To this end, future optical sensing technologies will benefit from innovations in device architecture, discoveries of new quantum materials, demonstrations of previously uncharacterized optical and optoelectronic phenomena, and rapid advances in the development of tailored machine-learning algorithms.

ADVANCES: Recently, a number of sensing and imaging demonstrations have emerged that differ substantially from conventional sensing schemes in the way that optical information is detected. A typical example is computational spectroscopy. In this new paradigm, a compact spectrometer first collectively captures the comprehensive spectral information of an incident light beam using multiple elements or a single element under different operational states and generates a high-dimensional photoresponse vector. An advanced algorithm then interprets the vector to achieve reconstruction of the spectrum. This scheme shifts the physical complexity of conventional grating- or interference-based spectrometers to computation. Moreover, many of the recent developments go well beyond optical spectroscopy, and we discuss them within a common framework, dubbed "geometric deep optical sensing." The term "geometric" is intended to emphasize that in this sensing scheme, the physical properties of an unknown light beam and the corresponding photoresponses can be regarded as points in two respective high-dimensional vector spaces and that the sensing process can be considered to be a mapping from one vector space to the other. The mapping can be linear, nonlinear, or highly entangled; for the latter two cases, deep artificial neural networks represent a natural choice for the encoding and/or decoding processes, from which the term "deep" is derived. In addition to this classical geometric view, the quantum geometry of Bloch electrons in Hilbert space, such as Berry curvature and quantum metrics, is essential for the determination of the polarization-dependent photoresponses in some optical sensors. In this Review, we first present a general perspective of this sensing scheme from the viewpoint of information theory, in which the photoresponse measurement and the extraction of light properties are deemed as information-encoding and -decoding processes, respectively. We then discuss demonstrations in which a reconfigurable sensor (or an array thereof), enabled by device reconfigurability and the implementation of neural networks, can detect the power, polarization state, wavelength, and spatial features of an incident light beam.
OUTLOOK: As increasingly more computing resources become available, optical sensing is becoming more computational, with device reconfigurability playing a key role. On the one hand, advanced algorithms, including deep neural networks, will enable effective decoding of high-dimensional photoresponse vectors, which reduces the physical complexity of sensors. Therefore, it will be important to integrate memory cells near or within sensors to enable efficient processing and interpretation of a large amount of photoresponse data. On the other hand, analog computation based on neural networks can be performed with an array of reconfigurable devices, which enables direct multiplexing of sensing and computing functions. We anticipate that these two directions will become the engineering frontier of future deep sensing research. On the scientific frontier, exploring quantum geometric and topological properties of new quantum materials in both linear and nonlinear light-matter interactions will enrich the information-encoding pathways for deep optical sensing. In addition, deep sensing schemes will continue to benefit from the latest developments in machine learning. Future highly compact, multifunctional, reconfigurable, and intelligent sensors and imagers will find applications in medical imaging, environmental monitoring, infrared astronomy, and many other areas of our daily lives, especially in the mobile domain and the internet of things.

Schematic of deep optical sensing: the n-dimensional unknown information (w) is encoded into an m-dimensional photoresponse vector (x) by a reconfigurable sensor (or an array thereof), from which w′ is reconstructed by a trained neural network (n′ = n and w′ ≈ w). Alternatively, x may be directly deciphered to capture certain properties of w. Here, w, x, and w′ can be regarded as points in their respective high-dimensional vector spaces ℛ^n, ℛ^m, and ℛ^(n′).
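A bare-bones rendering of the w → x → w′ mapping described in the schematic: the sketch below encodes an unknown spectrum-like vector w through an assumed random sensor response matrix and decodes an estimate with ridge-regularized least squares standing in for a trained neural network. The dimensions, the linear model, and the noise level are illustrative assumptions.

```python
# Minimal sketch of the geometric sensing picture: encode an unknown
# n-dimensional light property w into an m-dimensional photoresponse
# x = A @ w, then decode an estimate w'.  The random response matrix A
# and the ridge-regularized linear decoder (a stand-in for a trained
# neural network) are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, m = 64, 32   # n spectrum bins (unknown w), m detector readings (x)

A = rng.normal(size=(m, n))                 # calibrated sensor response
w = np.clip(rng.normal(size=n), 0, None)    # unknown non-negative spectrum
x = A @ w + 0.01 * rng.normal(size=m)       # measured photoresponse vector

# Decoder: ridge-regularized least squares, w' = (A^T A + lam I)^-1 A^T x
lam = 1e-2
w_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ x)

rel_err = np.linalg.norm(w_hat - w) / np.linalg.norm(w)
print(f"relative reconstruction error: {rel_err:.2f}")
```

With m < n the inversion is underdetermined, which is why practical computational sensors lean on priors, reconfigurable measurements, or trained networks rather than a plain linear solve.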