

Title: Bell inequalities for entangled qubits: quantitative tests of quantum character and nonlocality on quantum computers
This work provides quantitative tests of the extent of violation of two inequalities applicable to qubits coupled into Bell states, using IBM's publicly accessible quantum computers. Violations of the inequalities are well established. Our purpose is not to test the inequalities, but rather to determine how well quantum mechanical predictions can be reproduced on quantum computers, given their current fault rates. We present results for the spin projections of two entangled qubits, along three axes A, B, and C, with a fixed angle θ between A and B and a range of angles θ′ between B and C. For any classical object that can be characterized by three observables with two possible values, inequalities govern relationships among the probabilities of outcomes for the observables, taken pairwise. From set theory, these inequalities must be satisfied by all such classical objects; but quantum systems may violate the inequalities. We have detected clear-cut violations of one inequality in runs on IBM's publicly accessible quantum computers. The Clauser–Horne–Shimony–Holt (CHSH) inequality governs a linear combination S of expectation values of products of spin projections, taken pairwise. Finding S > 2 rules out local hidden-variable theories for entangled quantum systems. We obtained values of S greater than 2 in our runs prior to error mitigation. To reduce the quantitative errors, we used a modification of the error-mitigation procedure in the IBM documentation. We prepared a pair of qubits in the state |00〉, found the probabilities of observing the states |00〉, |01〉, |10〉, and |11〉 in multiple runs, and used that information to construct the first column of an error matrix M. We repeated this procedure for states prepared as |01〉, |10〉, and |11〉 to construct the full matrix M, whose inverse is the filtering matrix.
After applying filtering matrices to our averaged outcomes, we have found good quantitative agreement between the quantum computer output and the quantum mechanical predictions for the extent of violation of both inequalities as functions of θ′.
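The error-matrix construction and filtering step described above lend themselves to a short numerical illustration. The sketch below uses entirely hypothetical calibration numbers (a real run would populate them from repeated preparations and measurements of |00〉, |01〉, |10〉, and |11〉); it builds M column by column and applies its inverse, the filtering matrix, to a raw outcome distribution:

```python
import numpy as np

# Minimal sketch of the filtering step, with hypothetical calibration numbers.
# On hardware, each row below would come from repeatedly preparing one basis
# state and recording the observed distribution over |00>, |01>, |10>, |11>.
cal = [
    [0.94, 0.03, 0.02, 0.01],  # prepared |00>
    [0.04, 0.92, 0.01, 0.03],  # prepared |01>
    [0.03, 0.01, 0.93, 0.03],  # prepared |10>
    [0.02, 0.03, 0.04, 0.91],  # prepared |11>
]

# Column j of the error matrix M holds the outcome probabilities when basis
# state j was prepared, so M maps true probabilities to observed ones.
M = np.column_stack(cal)

# A raw (noisy) outcome distribution from an experiment, filtered by M^{-1},
# the "filtering matrix":
p_raw = np.array([0.47, 0.05, 0.06, 0.42])
p_mitigated = np.linalg.inv(M) @ p_raw
p_mitigated = p_mitigated / p_mitigated.sum()  # renormalize
```

In practice the inverted distribution can acquire small negative entries from statistical noise, so a constrained least-squares fit is a common alternative to a bare matrix inverse.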
Award ID(s):
1900399
NSF-PAR ID:
10271582
Author(s) / Creator(s):
Date Published:
Journal Name:
Physical Chemistry Chemical Physics
Volume:
23
Issue:
11
ISSN:
1463-9076
Page Range / eLocation ID:
6370 to 6387
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
1. Using IBM's publicly accessible quantum computers, we have analyzed the entropies of Schrödinger's cat states, which have the form Ψ = (1/2)^(1/2)[|0 0 0⋯0〉 + |1 1 1⋯1〉]. We have obtained the average Shannon entropy S_So of the distribution over measurement outcomes from 75 runs of 8192 shots, for each of the numbers of entangled qubits, on each of the quantum computers tested. For the distribution over N fault-free measurements on pure cat states, S_So would approach one as N → ∞, independent of the number of qubits; but we have found that S_So varies nearly linearly with the number of qubits n. The slope of S_So versus the number of qubits differs among computers with the same quantum volumes. We have developed a two-parameter model that reproduces the near-linear dependence of the entropy on the number of qubits, based on the probabilities of observing the output 0 when a qubit is set to |0〉 and 1 when it is set to |1〉. The slope increases as the error rate increases. The slope provides a sensitive measure of the accuracy of a quantum computer, so it serves as a quickly determinable index of performance. We have used tomographic methods with error mitigation as described in the Qiskit documentation to find the density matrix ρ and evaluate the von Neumann entropies of the cat states. From the reduced density matrices for individual qubits, we have calculated the entanglement entropies. The reduced density matrices represent mixed states with approximately 50/50 probabilities for states |0〉 and |1〉. The entanglement entropies are very close to one.
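The Shannon entropy used above is straightforward to compute from shot counts. A minimal sketch with made-up counts (not data from the runs described above): an ideal cat state's 50/50 outcome distribution has entropy exactly 1 bit, and bit-flip errors that leak probability into other bitstrings push the entropy above 1:

```python
import math
from collections import Counter

def shannon_entropy(counts):
    """Base-2 Shannon entropy of a distribution given as outcome -> count."""
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values() if c > 0)

# Ideal 3-qubit cat state: only '000' and '111' occur, 50/50, so the
# entropy of the outcome distribution is exactly 1 bit.
ideal = Counter({'000': 4096, '111': 4096})

# Hypothetical noisy shot counts (one 8192-shot run): bit-flip errors leak
# probability into the other bitstrings and push the entropy above 1 bit.
noisy = Counter({'000': 3700, '111': 3650, '001': 250, '110': 230,
                 '010': 180, '101': 120, '100': 40, '011': 22})
```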
2. INTRODUCTION Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.
RATIONALE A central question is whether classical ML algorithms can provably outperform non-ML algorithms in challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.
RESULTS We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H ( x ) depends smoothly on m parameters (denoted by x ). The ML algorithm learns from a set of training data consisting of sampled values of x , each accompanied by a classical representation of the ground state of H ( x ). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x . The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)–complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter. In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B . The ML algorithm then predicts the phase label for quantum states that were not encountered during training. 
The classical ML algorithm not only classifies phases accurately, but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
CONCLUSION We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.
Classical algorithms for quantum many-body problems. Classical ML algorithms learn from training data, obtained from either classical simulations or quantum experiments. Then, the ML algorithm produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
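The "succinct classical representation of quantum states derived from randomized Pauli measurements" mentioned above is the classical-shadow construction. As an illustration only (a single qubit prepared in |0〉, simulated classically, with an arbitrarily chosen shot count), the sketch below shows how randomized Pauli measurements, combined with the single-qubit inverse-channel factor of 3, yield an unbiased estimate of 〈Z〉:

```python
import random

def shadow_estimate_z(num_shots, rng):
    """Classical-shadow estimate of <Z> for the single-qubit state |0>,
    simulated classically. Each shot picks a uniformly random Pauli basis.
    Measuring |0> in the Z basis always yields +1; X- and Y-basis snapshots
    have zero overlap with Z, so they contribute 0 to this estimator. The
    factor of 3 is the single-qubit inverse of the measurement channel."""
    total = 0.0
    for _ in range(num_shots):
        basis = rng.choice('XYZ')
        if basis == 'Z':
            total += 3 * (+1)   # |0> is the +1 eigenstate of Z
        # X- and Y-basis snapshots contribute 0 to the <Z> estimate
    return total / num_shots

rng = random.Random(0)
est = shadow_estimate_z(30000, rng)  # converges to the true value <Z> = 1
```

Averaging 3 with probability 1/3 and 0 with probability 2/3 gives exactly 1 in expectation, which is why the estimator is unbiased despite measuring in the "wrong" basis two-thirds of the time.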
  3. Abstract

Qudit entanglement is an indispensable resource for quantum information processing, since increasing dimensionality provides a pathway to higher capacity and increased noise resilience in quantum communications and cluster-state quantum computations. In continuous-variable time–frequency entanglement, encoding multiple qubits per photon is limited only by the frequency correlation bandwidth and detection timing jitter. Here, we focus on discrete-variable time–frequency entanglement in a biphoton frequency comb (BFC), generated by filtering the signal and idler outputs with a fiber Fabry–Pérot cavity with 45.32 GHz free-spectral range (FSR) and 1.56 GHz full-width-at-half-maximum (FWHM) from a continuous-wave (cw)-pumped type-II spontaneous parametric downconverter (SPDC). We generate a BFC whose time-binned/frequency-binned Hilbert-space dimensionality is at least 324, based on the assumption of a pure state. The BFC's dimensionality doubles to 648 after combining with its post-selected polarization entanglement, indicating a potential 6.28 bits/photon classical-information capacity. The BFC exhibits recurring Hong–Ou–Mandel (HOM) dips over 61 time bins with a maximum visibility of 98.4% without correction for accidental coincidences. In a post-selected measurement, it violates the Clauser–Horne–Shimony–Holt (CHSH) inequality for polarization entanglement by up to 18.5 standard deviations, with an S-parameter of up to 2.771. It has Franson interference recurrences in 16 time bins with a maximum visibility of 96.1% without correction for accidental coincidences. From the zeroth- to the third-order Franson interference, we infer an entanglement of formation (E_of) up to 1.89 ± 0.03 ebits—where 2 ebits is the maximal entanglement for a 4 × 4-dimensional biphoton—as a lower bound on the 61-time-bin BFC's high-dimensional entanglement.
To further characterize time-binned/frequency-binned BFCs, we obtain Schmidt-mode decompositions of BFCs generated using cavities with 45.32, 15.15, and 5.03 GHz FSRs. These decompositions confirm the time–frequency scaling expected from Fourier-transform duality. Moreover, we present the theory of conjugate Franson interferometry—characterized by the state's joint-temporal intensity (JTI)—which can further help to distinguish a pure-state BFC from mixed-state entangled frequency pairs, although the experimental implementation is challenging and not yet available. In summary, our BFC serves as a platform for high-dimensional quantum information processing and high-dimensional quantum key distribution (QKD).
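As a quick back-of-envelope consistency check (not part of the paper's analysis), the quoted violation of up to 18.5 standard deviations at S up to 2.771 implies a statistical uncertainty on the S-parameter:

```python
# Back-calculating the uncertainty implied by the quoted numbers:
# a CHSH violation by k standard deviations means (S - 2) / sigma_S = k.
S = 2.771   # maximum measured CHSH S-parameter (quoted above)
k = 18.5    # quoted number of standard deviations of violation
sigma_S = (S - 2) / k   # implied uncertainty on S, about 0.042
```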

     
4. Recent constructions of quantum low-density parity-check (QLDPC) codes provide optimal scaling of the number of logical qubits and the minimum distance in terms of the code length, thereby opening the door to fault-tolerant quantum systems with minimal resource overhead. However, the hardware path from nearest-neighbor-connection-based topological codes to long-range-interaction-demanding QLDPC codes is likely a challenging one. Given the practical difficulty of building a monolithic architecture for quantum systems, such as computers, based on optimal QLDPC codes, it is worth considering a distributed implementation of such codes over a network of interconnected medium-sized quantum processors. In such a setting, all syndrome measurements and logical operations must be performed through the use of high-fidelity shared entangled states between the processing nodes. Since probabilistic many-to-1 distillation schemes for purifying entanglement are inefficient, we investigate quantum-error-correction-based entanglement purification in this work. Specifically, we employ QLDPC codes to distill GHZ states, as the resulting high-fidelity logical GHZ states can interact directly with the code used to perform distributed quantum computing (DQC), e.g., for fault-tolerant Steane syndrome extraction. This protocol is applicable beyond DQC, since entanglement distribution and purification is a quintessential task of any quantum network. We use the min-sum algorithm (MSA) based iterative decoder with a sequential schedule for distilling 3-qubit GHZ states using a rate-0.118 family of lifted-product QLDPC codes and obtain an input fidelity threshold of 0.7974 under i.i.d. single-qubit depolarizing noise. This represents the best threshold for a yield of 0.118 for any GHZ purification protocol.
Our results apply to larger GHZ states as well: we extend our technical result about a measurement property of 3-qubit GHZ states to construct a scalable GHZ purification protocol.
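To make the noise model concrete: the sketch below (a direct density-matrix simulation, not the authors' MSA-based decoder) applies i.i.d. single-qubit depolarizing noise to a 3-qubit GHZ state and evaluates the resulting input fidelity, the quantity the quoted 0.7974 threshold refers to. The depolarizing probability p is an arbitrary illustrative value:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p, qubit, n):
    """Apply a single-qubit depolarizing channel with probability p to
    `qubit` of an n-qubit density matrix rho."""
    out = (1 - p) * rho
    for P in (X, Y, Z):
        full = np.array([[1.0 + 0j]])
        for k in range(n):
            full = np.kron(full, P if k == qubit else I2)
        out = out + (p / 3) * full @ rho @ full.conj().T
    return out

n = 3
ghz = np.zeros(2**n, dtype=complex)
ghz[0] = ghz[-1] = 1 / np.sqrt(2)           # (|000> + |111>)/sqrt(2)
rho = np.outer(ghz, ghz.conj())

p = 0.05  # hypothetical per-qubit depolarizing probability
for q in range(n):                          # i.i.d. noise on every qubit
    rho = depolarize(rho, p, q, n)

fidelity = float(np.real(ghz.conj() @ rho @ ghz))  # input fidelity to purification
```

For p = 0.05 the fidelity stays near 0.86, above the quoted 0.7974 threshold; sweeping p shows how quickly the input fidelity degrades with the per-qubit error rate.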

     
5. Abstract
Indistinguishability of particles is a fundamental principle of quantum mechanics [1]. For all elementary and quasiparticles observed to date—including fermions, bosons and Abelian anyons—this principle guarantees that the braiding of identical particles leaves the system unchanged [2,3]. However, in two spatial dimensions, an intriguing possibility exists: braiding of non-Abelian anyons causes rotations in a space of topologically degenerate wavefunctions [4–8]. Hence, it can change the observables of the system without violating the principle of indistinguishability. Despite the well-developed mathematical description of non-Abelian anyons and numerous theoretical proposals [9–22], the experimental observation of their exchange statistics has remained elusive for decades. Controllable many-body quantum states generated on quantum processors offer another path for exploring these fundamental phenomena. Whereas efforts on conventional solid-state platforms typically involve Hamiltonian dynamics of quasiparticles, superconducting quantum processors allow for directly manipulating the many-body wavefunction by means of unitary gates. Building on predictions that stabilizer codes can host projective non-Abelian Ising anyons [9,10], we implement a generalized stabilizer code and unitary protocol [23] to create and braid them. This allows us to experimentally verify the fusion rules of the anyons and braid them to realize their statistics. We then study the prospect of using the anyons for quantum computation and use braiding to create an entangled state of anyons encoding three logical qubits. Our work provides new insights about non-Abelian braiding and, through the future inclusion of error correction to achieve topological protection, could open a path towards fault-tolerant quantum computing.