Abstract: Estimating expectation values is a key subroutine in quantum algorithms. Near-term implementations face two major challenges: a limited number of samples required to learn a large collection of observables, and the accumulation of errors in devices without quantum error correction. To address these challenges simultaneously, we develop a quantum error-mitigation strategy called symmetry-adjusted classical shadows, by adjusting classical-shadow tomography according to how symmetries are corrupted by device errors. As a concrete example, we highlight global U(1) symmetry, which manifests in fermions as particle number and in spins as total magnetization, and illustrate their group-theoretic unification with respective classical-shadow protocols. We establish rigorous sampling bounds under readout errors obeying minimal assumptions, and perform numerical experiments with a more comprehensive model of gate-level errors derived from existing quantum processors. Our results reveal symmetry-adjusted classical shadows as a low-cost strategy to mitigate errors from noisy quantum experiments in the ubiquitous presence of symmetry.
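The core idea lends itself to a small simulation. Below is a toy sketch, not the paper's full protocol (which handles general noise channels and group-theoretic structure): random-Pauli classical shadows of a particle-number eigenstate under an assumed independent readout bit-flip model, where the known value of the conserved total magnetization is used to rescale the noisy estimates. The state, noise rate q, and shot count are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

# Basis-change gates mapping X/Y/Z eigenstates to the computational basis.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
SDG = np.diag([1, -1j])
ROT = {"X": H, "Y": H @ SDG, "Z": np.eye(2, dtype=complex)}

n = 3
# Toy one-excitation state a|100> + b|010> + c|001>: particle number is
# exactly 1, so the total magnetization <Z0 + Z1 + Z2> = n - 2N = 1 is
# known in advance and serves as the calibration observable.
psi = np.zeros(2**n, dtype=complex)
psi[0b100], psi[0b010], psi[0b001] = np.sqrt([0.5, 0.3, 0.2])

q = 0.05        # assumed independent readout bit-flip rate per qubit
shots = 50_000

def snapshot():
    """One random-Pauli-basis measurement with noisy readout."""
    bases = rng.choice(["X", "Y", "Z"], size=n)
    phi = psi.reshape([2] * n)
    for i, b in enumerate(bases):
        phi = np.moveaxis(np.tensordot(ROT[b], phi, axes=([1], [i])), 0, i)
    probs = np.abs(phi.reshape(-1)) ** 2
    outcome = rng.choice(2**n, p=probs)
    bits = (outcome >> np.arange(n - 1, -1, -1)) & 1
    return bases, bits ^ (rng.random(n) < q)  # readout noise flips bits

# Classical-shadow estimates of each <Z_i>: a snapshot contributes
# 3 * (+-1) when qubit i happened to be measured in the Z basis, else 0.
z_est = np.zeros(n)
for _ in range(shots):
    bases, bits = snapshot()
    mask = bases == "Z"
    z_est[mask] += 3 * (1 - 2 * bits[mask])
z_est /= shots

# Symmetry adjustment: the noisy estimate of the conserved magnetization,
# divided by its known ideal value (= 1), gives the effective damping
# factor, which rescales observables of the same weight.
f = z_est.sum() / 1.0
print("noisy    <Z_i>:", np.round(z_est, 3))
print("adjusted <Z_i>:", np.round(z_est / f, 3))
print("exact    <Z_i>:", [0.0, 0.4, 0.6])
```

Under this bit-flip model each weight-1 Z observable is damped by exactly (1 - 2q), so the ratio correction recovers the ideal values up to shot noise; the paper's protocol generalizes this ratio idea well beyond such a simple channel.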
Precision Bounds on Continuous-Variable State Tomography Using Classical Shadows
Shadow tomography is a framework for constructing succinct descriptions of quantum states using randomized measurement bases, called "classical shadows," with powerful methods to bound the estimators used. We recast existing experimental protocols for continuous-variable quantum state tomography in the classical-shadow framework, obtaining rigorous bounds on the number of independent measurements needed for estimating density matrices from these protocols. We analyze the efficiency of homodyne, heterodyne, photon-number-resolving, and photon-parity protocols. To reach a desired precision on the classical shadow of an N-photon density matrix with high probability, we show that homodyne detection requires O(N^{4+1/3}) measurements in the worst case, whereas photon-number-resolving and photon-parity detection require O(N^4) measurements in the worst case (both up to logarithmic corrections). We benchmark these results against numerical simulation as well as experimental data from optical homodyne experiments. We find that numerical and experimental analyses of homodyne tomography match closely with our theoretical predictions. We extend our single-mode results to an efficient construction of multimode shadows based on local measurements.
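To get a feel for the gap between these worst-case scalings, the short sketch below simply tabulates the two quoted exponents for a few photon-number cutoffs N (constants and logarithmic corrections, which the bounds suppress, are ignored):

```python
# Worst-case measurement scalings quoted above, up to constants and logs.
for N in (2, 5, 10, 20):
    homodyne = N ** (4 + 1 / 3)   # O(N^{4+1/3}) for homodyne detection
    pnr = N ** 4                  # O(N^4) for photon-number/parity detection
    print(f"N={N:>2}: homodyne ~ {homodyne:.2e}, PNR/parity ~ {pnr:.2e}")
```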
- Award ID(s): 2120757
- PAR ID: 10505826
- Publisher / Repository: American Physical Society
- Date Published:
- Journal Name: PRX Quantum
- Volume: 5
- Issue: 1
- ISSN: 2691-3399
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Classical shadows (CS) offer a resource-efficient means to estimate quantum observables, circumventing the need for exhaustive state tomography. Here, we clarify and explore the connection between CS techniques and the least squares (LS) and regularized least squares (RLS) methods commonly used in machine learning and data analysis. By formally identifying LS and RLS "shadows" completely analogous to those in CS, namely point estimators calculated from the empirical frequencies of single measurements, we show that both RLS and CS can be viewed as regularizers for the underdetermined regime, replacing the pseudoinverse with invertible alternatives. Through numerical simulations, we evaluate RLS and CS from three distinct angles: the tradeoff between bias and variance, mismatch between the expected and actual measurement distributions, and the interplay between the number of measurements and the number of shots per measurement. Compared to CS, RLS attains lower variance at the expense of bias, is robust to distribution mismatch, and is more sensitive to the number of shots for a fixed number of state copies; these differences can be understood from the distinct approaches taken to regularization. Conceptually, our integration of LS, RLS, and CS under a unifying "shadow" umbrella helps advance the overall picture of CS techniques, while practically our results highlight the tradeoffs intrinsic to these measurement approaches, illuminating the circumstances under which either RLS or CS would be preferred, such as unverified randomness for the former or unbiased estimation for the latter.
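The pseudoinverse-versus-ridge distinction can be seen in a minimal single-qubit regression analogy (not the paper's exact estimators; the design matrix, shot counts, and ridge weight lam below are illustrative): measuring along Bloch axis u yields ±1 with mean u · r, and the Bloch vector r is recovered from empirical means.

```python
import numpy as np

rng = np.random.default_rng(1)

# Each row of A is one measurement axis; y holds empirical outcome means.
r_true = np.array([0.6, 0.0, 0.7])
A = np.eye(3)[rng.integers(0, 3, size=9)]  # random Pauli axes
shots = 25                                  # few shots -> noisy frequencies
y = np.array([
    np.mean(rng.choice([1.0, -1.0], size=shots,
                       p=[(1 + u @ r_true) / 2, (1 - u @ r_true) / 2]))
    for u in A
])

# LS "shadow": pseudoinverse point estimator from empirical frequencies.
r_ls = np.linalg.pinv(A) @ y

# RLS "shadow": the ridge term keeps the normal equations invertible even
# when underdetermined, trading variance for bias (shrinkage toward zero).
lam = 1.0
r_rls = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)

print("true:", r_true)
print("LS  :", np.round(r_ls, 3))
print("RLS :", np.round(r_rls, 3))
```

Rerunning with different seeds shows the qualitative behavior described above: the RLS estimate fluctuates less across runs but is systematically pulled toward zero.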
-
The authors provide the first tight sample complexity bounds for shadow tomography and classical shadows in the regime where the target error is below some sufficiently small inverse polynomial in the dimension of the Hilbert space. Specifically, they present a protocol that, given any m ∈ N and ε ≤ O(d^(−1/2)), measures O(log(m)/ε²) copies of an unknown mixed state ρ ∈ C^(d×d) and outputs a classical description of ρ. This description can then be used to estimate any collection of m observables to within additive accuracy ε. Previously, even for the simpler case of shadow tomography where the observables are known in advance, the best known rates either scaled benignly but suboptimally in all of m, d, ε, or scaled optimally in ε and m but included additional polynomial factors in d. Interestingly, the authors also show via dimensionality reduction that one can rescale ε and d to reduce to the regime ε ≤ O(d^(−1/2)). Their algorithm draws on representation-theoretic tools developed in the context of full state tomography.
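Restated compactly, the guarantee quoted above reads:

```latex
% Sample-complexity guarantee as quoted in the summary above.
\[
  \epsilon \le O\!\left(d^{-1/2}\right)
  \;\Longrightarrow\;
  O\!\left(\frac{\log m}{\epsilon^{2}}\right)
  \text{ copies of } \rho \in \mathbb{C}^{d\times d}
  \text{ suffice to estimate any } m \text{ observables to additive error } \epsilon.
\]
```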
-
Homodyne detection is a common self-referenced technique to extract optical quadratures. Due to ubiquitous fluctuations, experiments measuring optical quadratures require homodyne angle control. Current homodyne angle locking techniques only provide high-quality error signals in a span significantly smaller than π radians, the span required for full state tomography, leading to inevitable discontinuities during full tomography. Here, we present and demonstrate a locking technique using a universally tunable modulator which produces high-quality error signals at an arbitrary homodyne angle. Our work enables continuous full-state tomography and paves the way to backaction-evasion protocols based on a time-varying homodyne angle.
-
Servedio, Rocco (Ed.) We study the complexity of lattice problems in a world where algorithms, reductions, and protocols can run in superpolynomial time, revisiting four foundational results: two worst-case to average-case reductions and two protocols. We also show a novel protocol.
1. We prove that secret-key cryptography exists if Õ(√n)-approximate SVP is hard for 2^(εn)-time algorithms. That is, we extend to our setting (Micciancio and Regev's improved version of) Ajtai's celebrated polynomial-time worst-case to average-case reduction from Õ(n)-approximate SVP to SIS.
2. We prove that public-key cryptography exists if Õ(n)-approximate SVP is hard for 2^(εn)-time algorithms. This extends to our setting Regev's celebrated polynomial-time worst-case to average-case reduction from Õ(n^1.5)-approximate SVP to LWE. In fact, Regev's reduction is quantum, but ours is classical, generalizing Peikert's polynomial-time classical reduction from Õ(n^2)-approximate SVP.
3. We show a 2^(εn)-time coAM protocol for O(1)-approximate CVP, generalizing the celebrated polynomial-time protocol for O(√(n/log n))-CVP due to Goldreich and Goldwasser. These results show complexity-theoretic barriers to extending the recent line of fine-grained hardness results for CVP and SVP to larger approximation factors. (This result also extends to arbitrary norms.)
4. We show a 2^(εn)-time co-non-deterministic protocol for O(√(log n))-approximate SVP, generalizing the (also celebrated!) polynomial-time protocol for O(√n)-CVP due to Aharonov and Regev.
5. We give a novel coMA protocol for O(1)-approximate CVP with a 2^(εn)-time verifier.
All of the results described above are special cases of more general theorems that achieve time-approximation-factor tradeoffs.