
Shadow tomography is a framework for constructing succinct descriptions of quantum states using randomized measurement bases, called "classical shadows," with powerful methods to bound the estimators used. We recast existing experimental protocols for continuous-variable quantum state tomography in the classical-shadow framework, obtaining rigorous bounds on the number of independent measurements needed to estimate density matrices from these protocols. We analyze the efficiency of homodyne, heterodyne, photon-number-resolving, and photon-parity protocols. To reach a desired precision on the classical shadow of an N-photon density matrix with high probability, we show that homodyne detection requires O(N^(4+1/3)) measurements in the worst case, whereas photon-number-resolving and photon-parity detection require O(N^4) measurements in the worst case (both up to logarithmic corrections). We benchmark these results against numerical simulation as well as experimental data from optical homodyne experiments, and find that both the numerical and experimental analyses of homodyne tomography match closely with our theoretical predictions. We extend our single-mode results to an efficient construction of multimode shadows based on local measurements.

Free, publicly accessible full text available March 18, 2025.
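The classical-shadow idea is easiest to see in the finite-dimensional setting. The sketch below is an illustrative qubit toy model, not the continuous-variable protocols analyzed above: each snapshot measures the state in a random Pauli basis and inverts the measurement channel via M^(-1)(P) = 3P - I, so that snapshots average to an unbiased estimate of the density matrix. All function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2)
paulis = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def shadow_snapshot(rho):
    """Measure rho in a uniformly random Pauli basis and return the
    channel-inverted snapshot 3|v><v| - I, an unbiased estimator of rho."""
    P = paulis[rng.choice(list(paulis))]
    _, vecs = np.linalg.eigh(P)
    probs = np.array([np.real(vecs[:, k].conj() @ rho @ vecs[:, k]) for k in range(2)])
    k = rng.choice(2, p=probs / probs.sum())
    proj = np.outer(vecs[:, k], vecs[:, k].conj())
    return 3 * proj - I2

# True state: |+><+|, for which <X> = 1
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho = np.outer(plus, plus.conj())

snapshots = [shadow_snapshot(rho) for _ in range(20000)]
rho_hat = np.mean(snapshots, axis=0)

# Estimate <X> from the shadow; the result concentrates near the true value 1
est = np.real(np.trace(paulis["X"] @ rho_hat))
print(est)
```

The single-snapshot estimator has bounded variance, so averaging many independent snapshots pins down expectation values; the papers' bounds quantify exactly this sample complexity for continuous-variable detectors.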

We generalize the notion of quantum state designs to infinite-dimensional spaces. We first prove that, under the definition of continuous-variable (CV) state t-designs from [Blume-Kohout et al., Commun. Math. Phys. 326, 755 (2014)], no state designs exist for t ≥ 2. Similarly, we prove that no CV unitary t-designs exist for t ≥ 2. We propose an alternative definition of CV state designs, which we call rigged t-designs, and provide explicit constructions for t = 2. As an application of rigged designs, we develop a design-based shadow-tomography protocol for CV states. Using energy-constrained versions of rigged designs, we define an average fidelity for CV quantum channels and relate this fidelity to the CV entanglement fidelity. As an additional result of independent interest, we establish a connection between torus 2-designs and complete sets of mutually unbiased bases.

Free, publicly accessible full text available February 8, 2025.
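For context on the last claim, mutually unbiased bases are easy to exhibit in finite dimension. The snippet below is an illustrative qubit example, not the torus 2-design construction itself: it verifies that the Z, X, and Y eigenbases form a complete set of d + 1 = 3 mutually unbiased bases for d = 2, i.e., |<e_i|f_j>|^2 = 1/d for vectors drawn from different bases.

```python
import numpy as np

s = 1 / np.sqrt(2)
# Columns of each matrix are the basis vectors.
bases = [
    np.eye(2, dtype=complex),                      # Z eigenbasis: |0>, |1>
    np.array([[s, s], [s, -s]], dtype=complex),    # X eigenbasis
    np.array([[s, s], [1j * s, -1j * s]]),         # Y eigenbasis
]

# Check pairwise unbiasedness: every cross-basis overlap squared equals 1/2
for m in range(3):
    for n in range(m + 1, 3):
        overlaps = np.abs(bases[m].conj().T @ bases[n]) ** 2
        assert np.allclose(overlaps, 0.5)
print("all pairwise overlaps equal 1/d = 1/2")
```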

Detection of very weak forces and precise measurement of time are two of the many applications of quantum metrology to science and technology. To sense an unknown physical parameter, one prepares an initial state of a probe system, allows the probe to evolve as governed by a Hamiltonian H for some time t, and then measures the probe. If H is known, we can estimate t by this method; if t is known, we can estimate classical parameters on which H depends. The accuracy of a quantum sensor can be limited either by intrinsic quantum noise or by noise arising from interactions of the probe with its environment. In this work, we introduce and study a fundamental tradeoff that relates the amount by which noise reduces the accuracy of a quantum clock to the amount of information about the energy of the clock that leaks to the environment. Specifically, we consider an idealized scenario in which a party, Alice, prepares an initial pure state of the clock, allows the clock to evolve for a time that is not precisely known, and then transmits the clock through a noisy channel to a party, Bob. Meanwhile, the environment (Eve) receives any information about the clock that is lost during transmission. We prove that Bob's loss of quantum Fisher information about the elapsed time is equal to Eve's gain of quantum Fisher information about a complementary energy parameter. We also prove a similar, but more general, tradeoff that applies when Bob and Eve wish to estimate the values of parameters associated with two noncommuting observables. We derive necessary and sufficient conditions for the accuracy of the clock to be unaffected by the noise, which form a subset of the Knill-Laflamme error-correction conditions. A state and its local time-evolution direction that satisfy these conditions are said to form a metrological code. We provide a scheme for constructing metrological codes in the stabilizer formalism.
We show that there are metrological codes that cannot be written as a quantum error-correcting code of similar distance in which the Hamiltonian acts as a logical operator, potentially offering new schemes for constructing states that do not lose any sensitivity upon application of a noisy channel. We discuss applications of the tradeoff relation to sensing with a quantum many-body probe subject to erasure or amplitude-damping noise.

Free, publicly accessible full text available December 5, 2024.
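The quantum Fisher information that quantifies a clock's timing accuracy has a simple closed form for pure states: F_Q = 4 Var_psi(H) (with hbar = 1). A minimal numerical check, with an illustrative function name of our own choosing:

```python
import numpy as np

def qfi_pure(psi, H):
    """Quantum Fisher information for estimating t in exp(-iHt)|psi>.
    For pure states F_Q = 4 (<H^2> - <H>^2), independent of t."""
    psi = psi / np.linalg.norm(psi)
    h = np.real(psi.conj() @ H @ psi)
    h2 = np.real(psi.conj() @ H @ H @ psi)
    return 4 * (h2 - h**2)

Z = np.diag([1.0, -1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)   # equal superposition: maximal energy variance
zero = np.array([1.0, 0.0])                # energy eigenstate: no timing information

print(qfi_pure(plus, Z / 2))  # ≈ 1: the best single-qubit clock for H = Z/2
print(qfi_pure(zero, Z / 2))  # ≈ 0: eigenstates do not sense elapsed time
```

The eigenstate example makes the tradeoff intuitive: a state with zero energy variance leaks no energy information to the environment, but it is also useless as a clock.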

Concatenating bosonic error-correcting codes with qubit codes can substantially boost the error-correcting power of the original qubit codes. It is not clear how to concatenate optimally, given that there are several bosonic codes and concatenation schemes to choose from, including the recently discovered Gottesman-Kitaev-Preskill (GKP) stabilizer codes [Phys. Rev. Lett. 125, 080503 (2020)], which allow protection of a logical bosonic mode from fluctuations of the mode's conjugate variables. We develop efficient maximum-likelihood decoders for, and analyze the performance of, three different concatenations of codes taken from the following set: qubit stabilizer codes, analog or Gaussian stabilizer codes, GKP codes, and GKP-stabilizer codes. We benchmark decoder performance against additive Gaussian white noise, corroborating our numerics with analytical calculations. We observe that the concatenation involving GKP-stabilizer codes outperforms the more conventional concatenation of a qubit stabilizer code with a GKP code in some cases. We also propose a GKP-stabilizer code that suppresses fluctuations in both conjugate variables without extra quadrature squeezing, and we formulate qudit versions of GKP-stabilizer codes.
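To illustrate the kind of shift-error correction GKP codes provide, here is a hedged Monte Carlo sketch of an idealized single-mode square-lattice GKP code under Gaussian quadrature shifts. This is a toy model, not the concatenated maximum-likelihood decoders developed above: decoding rounds the measured shift to the nearest multiple of sqrt(pi), and a logical flip occurs when the shift lands nearer an odd multiple than an even one.

```python
import numpy as np

rng = np.random.default_rng(1)
SQRT_PI = np.sqrt(np.pi)

def gkp_logical_error_rate(sigma, n=200_000):
    """Monte Carlo logical-error rate of an ideal single-mode GKP code
    against Gaussian shifts of standard deviation sigma in one quadrature."""
    shifts = rng.normal(0.0, sigma, n)
    k = np.rint(shifts / SQRT_PI)   # nearest integer multiple of sqrt(pi)
    return np.mean(k % 2 != 0)      # odd multiple -> logical flip

for sigma in (0.2, 0.4, 0.6):
    print(sigma, gkp_logical_error_rate(sigma))
```

Shifts smaller than sqrt(pi)/2 ≈ 0.886 are always corrected, so the error rate collapses rapidly as sigma shrinks; concatenation with a qubit stabilizer code then mops up the residual logical flips.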

INTRODUCTION
Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.

RATIONALE
A central question is whether classical ML algorithms can provably outperform non-ML algorithms on challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.
RESULTS
We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H(x) depends smoothly on m parameters (denoted by x). The ML algorithm learns from a set of training data consisting of sampled values of x, each accompanied by a classical representation of the ground state of H(x). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x. The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)-complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter. In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B. The ML algorithm then predicts the phase label for quantum states that were not encountered during training.
The classical ML algorithm not only classifies phases accurately but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg-atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.

CONCLUSION
We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.

Figure: Classical algorithms for quantum many-body problems. Classical ML algorithms learn from training data obtained from either classical simulations or quantum experiments. The ML algorithm then produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
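The learning setup described above can be caricatured in a few lines: regress a smooth ground-state property against the Hamiltonian parameters x. The sketch below substitutes a synthetic smooth function for measured ground-state data and uses plain kernel ridge regression; the function, kernel width, and regularization are all illustrative stand-ins of our own, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in for a smooth ground-state property <O>(x); in the
# actual setting this would come from measurements of ground states of H(x).
def true_property(x):
    return np.sin(3 * x) + 0.5 * x

# Training data: sampled parameter values with (noisy) measured properties
x_train = rng.uniform(-1, 1, 40)
y_train = true_property(x_train) + rng.normal(0, 0.02, x_train.size)

def rbf_kernel(a, b, length=0.3):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * length**2))

# Kernel ridge regression: solve (K + lam I) alpha = y
lam = 1e-3
K = rbf_kernel(x_train, x_train)
alpha = np.linalg.solve(K + lam * np.eye(x_train.size), y_train)

def predict(x_new):
    return rbf_kernel(x_new, x_train) @ alpha

# Evaluate on parameters never seen in training
x_test = np.linspace(-0.9, 0.9, 50)
err = np.max(np.abs(predict(x_test) - true_property(x_test)))
print(err)  # small: the smooth property generalizes from few samples
```

The paper's guarantee is of this flavor but rigorous: for gapped local Hamiltonians, polynomially many samples suffice for small average prediction error, and (conditionally) no data-free polynomial-time classical algorithm can match it.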

Quantum many-body systems involving bosonic modes or gauge fields have infinite-dimensional local Hilbert spaces, which must be truncated to perform simulations of real-time dynamics on classical or quantum computers. To analyze the truncation error, we develop methods for bounding the rate of growth of local quantum numbers, such as the occupation number of a mode at a lattice site or the electric field at a lattice link. Our approach applies to various models of bosons interacting with spins or fermions, as well as to both abelian and non-abelian gauge theories. We show that if states in these models are truncated by imposing an upper limit Λ on each local quantum number, and if the initial state has low local quantum numbers, then an error of at most ϵ can be achieved by choosing Λ to scale polylogarithmically with ϵ^(-1), an exponential improvement over previous bounds based on energy conservation. For the Hubbard-Holstein model, we numerically compute a bound on Λ that achieves accuracy ϵ, obtaining significantly improved estimates in various parameter regimes. We also establish a criterion for truncating the Hamiltonian with a provable guarantee on the accuracy of time evolution. Building on that result, we formulate quantum algorithms for dynamical simulation of lattice gauge theories and of models with bosonic modes; the gate complexity depends almost linearly on spacetime volume in the former case and almost quadratically on time in the latter case. We establish a lower bound showing that there are systems involving bosons for which this quadratic scaling with time cannot be improved. By applying our result on the truncation error in time evolution, we also prove that spectrally isolated energy eigenstates can be approximated with accuracy ϵ by truncating local quantum numbers at Λ = polylog(ϵ^(-1)).
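The favorable cutoff scaling reflects how quickly low-energy states' Fock-space support decays with occupation number. As a hedged illustration (a displaced harmonic oscillator with exactly known ground energy, chosen by us; not one of the gauge-theory models above), truncating the Fock space at occupation cutoff Λ gives a ground-energy error that shrinks very rapidly with Λ:

```python
import numpy as np

def annihilation(L):
    """Annihilation operator on the Fock basis truncated at occupation L."""
    return np.diag(np.sqrt(np.arange(1, L + 1)), k=1)

lam = 1.0
# Displaced oscillator H = a†a + lam (a + a†) has exact ground energy -lam^2
exact = -lam**2

errs = []
for L in (4, 8, 16, 32):
    a = annihilation(L)
    H = a.T @ a + lam * (a + a.T)
    e0 = np.linalg.eigvalsh(H).min()
    errs.append(e0 - exact)       # nonnegative: truncation is variational
    print(L, errs[-1])
```

The ground state here is a coherent state whose Fock amplitudes decay faster than exponentially, so modest cutoffs already give tiny errors; the paper's bounds establish polylogarithmic cutoff scaling in much greater generality.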

We investigate novel protocols for entanglement purification of qubit Bell pairs. Employing genetic algorithms to design the purification circuits, we obtain shorter circuits that achieve higher success rates and better final fidelities than those currently available in the literature. We provide a software tool for analytical and numerical study of the generated purification circuits under customizable error models. These new purification protocols pave the way to practical implementations of modular quantum computers and quantum repeaters. Our approach is particularly attentive to the effects of finite resources and imperfect local operations, phenomena neglected in the usual asymptotic approach to the problem. The choice of building blocks permitted in the construction of the circuits is based on a thorough enumeration of the local Clifford operations that act as permutations on the basis of Bell states.
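For context on the asymptotic baseline such circuits improve on, the classic BBPSSW recurrence protocol for Werner pairs has a closed-form fidelity map. A minimal sketch (the standard textbook formulas, assuming perfect local operations, which is precisely the idealization the work above relaxes):

```python
def bbpssw_step(F):
    """One round of the BBPSSW recurrence protocol on two Werner pairs of
    fidelity F, with perfect local operations. Returns the post-selected
    output fidelity and the success probability of the round."""
    num = F**2 + ((1 - F) / 3) ** 2
    p_success = F**2 + 2 * F * (1 - F) / 3 + 5 * ((1 - F) / 3) ** 2
    return num / p_success, p_success

# Starting above F = 1/2, repeated rounds drive the fidelity toward 1,
# at the cost of consuming one pair per round and sometimes failing.
F = 0.7
for _ in range(4):
    F, p = bbpssw_step(F)
    print(F, p)
```

Each successful round consumes half the surviving pairs, which is why finite-resource, noisy-gate circuit designs like those above can outperform this recurrence in practice.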