Shadow tomography is a framework for constructing succinct descriptions of quantum states using randomized measurement bases, called "classical shadows," with powerful methods to bound the estimators used. We recast existing experimental protocols for continuous-variable quantum state tomography in the classical-shadow framework, obtaining rigorous bounds on the number of independent measurements needed for estimating density matrices from these protocols. We analyze the efficiency of homodyne, heterodyne, photon-number-resolving, and photon-parity protocols. To reach a desired precision on the classical shadow of an N-photon density matrix with high probability, we show that homodyne detection requires order O(N^(4+1/3)) measurements in the worst case, whereas photon-number-resolving and photon-parity detection require O(N^4) measurements in the worst case (both up to logarithmic corrections). We benchmark these results against numerical simulation as well as experimental data from optical homodyne experiments. We find that numerical and experimental analyses of homodyne tomography match closely with our theoretical predictions. We extend our single-mode results to an efficient construction of multimode shadows based on local measurements.
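As a rough illustration of the quoted worst-case scalings, the short sketch below compares the homodyne and photon-number-resolving sample counts as a function of photon number N. The unit prefactors and the omission of the logarithmic corrections are our simplifications for illustration, not part of the stated bounds.

```python
# Illustrative comparison of the worst-case sample-complexity scalings quoted above.
# Prefactors are set to 1 and logarithmic corrections are ignored; both are
# simplifications made here for illustration only.

def homodyne_samples(n_photons: int) -> float:
    """Worst-case scaling for homodyne shadows: O(N^(4 + 1/3))."""
    return n_photons ** (4 + 1 / 3)

def pnr_samples(n_photons: int) -> float:
    """Worst-case scaling for photon-number-resolving / photon-parity shadows: O(N^4)."""
    return n_photons ** 4

for N in (5, 10, 20, 40):
    ratio = homodyne_samples(N) / pnr_samples(N)   # grows like N^(1/3)
    print(f"N = {N:3d}: homodyne / PNR sample ratio ~ {ratio:.2f}")
```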
                            Group-theoretic error mitigation enabled by classical shadows and symmetries
Abstract Estimating expectation values is a key subroutine in quantum algorithms. Near-term implementations face two major challenges: a limited number of samples from which to learn a large collection of observables, and the accumulation of errors in devices without quantum error correction. To address these challenges simultaneously, we develop a quantum error-mitigation strategy called symmetry-adjusted classical shadows, which adjusts classical-shadow tomography according to how symmetries are corrupted by device errors. As a concrete example, we highlight global U(1) symmetry, which manifests in fermions as particle number and in spins as total magnetization, and illustrate its group-theoretic unification with the respective classical-shadow protocols. We establish rigorous sampling bounds under readout errors obeying minimal assumptions, and perform numerical experiments with a more comprehensive model of gate-level errors derived from existing quantum processors. Our results reveal symmetry-adjusted classical shadows as a low-cost strategy for mitigating errors in noisy quantum experiments in the ubiquitous presence of symmetry.
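The adjustment idea can be sketched as follows. This is a deliberately simplified illustration, not the paper's estimator: we assume readout noise attenuates every traceless observable by a single unknown factor, and the function and variable names are our own.

```python
# Schematic sketch of the symmetry-adjustment idea, under a simplified noise model
# in which errors attenuate all traceless observables by one unknown factor f.
# This scalar-attenuation assumption and the names below are illustrative only.

def adjusted_estimate(noisy_obs_estimate, noisy_symmetry_estimate, ideal_symmetry_value):
    """Rescale a noisy shadow estimate of an observable using a conserved quantity.

    noisy_obs_estimate:      shadow estimate of <O> from the noisy device
    noisy_symmetry_estimate: shadow estimate of the symmetry operator (e.g. total
                             magnetization or particle number) from the same data
    ideal_symmetry_value:    its known, noiseless value for the prepared state
    """
    attenuation = noisy_symmetry_estimate / ideal_symmetry_value  # estimate of f
    return noisy_obs_estimate / attenuation

# Toy example: the state is prepared in a fixed symmetry sector with value 4.0,
# but noise has damped all estimates by a factor 0.8 (unknown to the experimenter).
true_obs, f = 0.5, 0.8
print(adjusted_estimate(noisy_obs_estimate=f * true_obs,
                        noisy_symmetry_estimate=f * 4.0,
                        ideal_symmetry_value=4.0))  # recovers ~0.5
```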
- PAR ID: 10513168
- Publisher / Repository: Nature Publishing Group
- Date Published:
- Journal Name: npj Quantum Information
- Volume: 10
- Issue: 1
- ISSN: 2056-6387
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract We present estimates for the number of shadow-resolved supermassive black hole (SMBH) systems that can be detected using radio interferometers, as a function of angular resolution, flux-density sensitivity, and observing frequency. Accounting for the distribution of SMBHs across mass, redshift, and accretion rate, we use a new semianalytic spectral energy distribution model to derive the number of SMBHs with detectable and optically thin horizon-scale emission. We demonstrate that (sub)millimeter interferometric observations with ∼0.1 μas resolution and ∼1 μJy sensitivity could access >10^6 SMBH shadows. We then further decompose the shadow source counts into the number of black holes for which we could expect to observe the first- and second-order lensed photon rings. Accessing the bulk population of first-order photon rings requires ≲2 μas resolution and ≲0.5 mJy sensitivity, whereas doing the same for second-order photon rings requires ≲0.1 μas resolution and ≲5 μJy sensitivity. Our model predicts that with modest improvements to sensitivity, as many as ∼5 additional horizon-resolved sources should become accessible to the current Event Horizon Telescope (EHT), whereas a next-generation EHT observing at 345 GHz should have access to ∼3 times as many sources. More generally, our results can help guide enhancements of current arrays and specifications for future interferometric experiments that aim to spatially resolve a large population of SMBH shadows or higher-order photon rings.
- Abstract We report better-than-classical success probabilities for a complete Grover quantum search algorithm on the largest scale demonstrated to date, of up to five qubits, using two different IBM platforms. This is enabled by error suppression via robust dynamical decoupling. Further improvements arise after the use of measurement error mitigation, but the latter is insufficient by itself for achieving better-than-classical performance. For two qubits, we demonstrate a 99.5% success probability via the use of the [[4, 2, 2]] quantum error-detection (QED) code. This constitutes a demonstration of quantum algorithmic breakeven via QED. Along the way, we introduce algorithmic error tomography (AET), a method that provides a holistic view of the errors accumulated throughout an entire quantum algorithm, filtered via the errors detected by the QED code used to encode the circuit. We demonstrate that AET provides a stringent test of an error model based on a combination of amplitude damping, dephasing, and depolarization.
- Classical shadows (CS) offer a resource-efficient means to estimate quantum observables, circumventing the need for exhaustive state tomography. Here, we clarify and explore the connection between CS techniques and the least squares (LS) and regularized least squares (RLS) methods commonly used in machine learning and data analysis. By formally identifying LS and RLS "shadows" completely analogous to those in CS, namely point estimators calculated from the empirical frequencies of single measurements, we show that both RLS and CS can be viewed as regularizers for the underdetermined regime, replacing the pseudoinverse with invertible alternatives. Through numerical simulations, we evaluate RLS and CS from three distinct angles: the tradeoff between bias and variance, mismatch between the expected and actual measurement distributions, and the interplay between the number of measurements and the number of shots per measurement. Compared to CS, RLS attains lower variance at the expense of bias, is robust to distribution mismatch, and is more sensitive to the number of shots for a fixed number of state copies; these differences can be understood from the distinct approaches taken to regularization. Conceptually, our integration of LS, RLS, and CS under a unifying "shadow" umbrella aids in advancing the overall picture of CS techniques, while practically our results highlight the tradeoffs intrinsic to these measurement approaches, illuminating the circumstances under which either RLS or CS would be preferred, such as unverified randomness for the former or unbiased estimation for the latter. (A minimal numerical sketch contrasting the pseudoinverse and regularized reconstructions follows this list.)
- Abstract Practical quantum computing will require error rates well below those achievable with physical qubits. Quantum error correction [1,2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, such that increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here we report the measurement of logical qubit performance scaling across several code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find that our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, in terms of both logical error probability over 25 cycles and logical error per cycle ((2.914 ± 0.016)% compared to (3.028 ± 0.023)%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10^−6 logical error per cycle floor set by a single high-energy event (1.6 × 10^−7 excluding this event). We accurately model our experiment, extracting error budgets that highlight the biggest challenges for future systems. These results mark an experimental demonstration in which quantum error correction begins to improve performance with increasing qubit number, illuminating the path to reaching the logical error rates required for computation.
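As a quick check of the distance-5 versus distance-3 comparison quoted in the last item above, the arithmetic below computes the relative improvement from the two logical-error-per-cycle values; the "relative improvement" framing is ours, not the paper's headline metric.

```python
# Arithmetic check of the numbers quoted in the surface-code abstract above.
d3_error_per_cycle = 3.028e-2   # distance-3 ensemble, logical error per cycle
d5_error_per_cycle = 2.914e-2   # distance-5 logical qubit, logical error per cycle

relative_improvement = (d3_error_per_cycle - d5_error_per_cycle) / d3_error_per_cycle
print(f"Relative improvement: {100 * relative_improvement:.1f}%")  # ≈ 3.8%
```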
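The following is a minimal, self-contained sketch of the LS-versus-RLS contrast described in the classical-shadows item above. The single-qubit setup, shot counts, and regularization strength are our own illustrative choices, and the proper CS estimator is not implemented here.

```python
import numpy as np

# Minimal numerical sketch (our construction) contrasting plain least-squares
# (pseudoinverse) and regularized least-squares reconstruction from single-qubit
# Pauli-measurement frequencies.  The Bloch vector r enters the outcome
# probabilities linearly: p(outcome s in basis k) = (1 + s * r_k) / 2.

rng = np.random.default_rng(0)
r_true = np.array([0.6, -0.3, 0.5])          # true Bloch vector (X, Y, Z components)
shots_per_basis = 200

# Build the design matrix: rows are (basis k, outcome s) pairs.  Only the X and Z
# bases are measured, so the problem is underdetermined in the Y component.
rows, freqs = [], []
for k in (0, 2):                              # X and Z bases only
    p_plus = (1 + r_true[k]) / 2
    n_plus = rng.binomial(shots_per_basis, p_plus)
    for s, n in ((+1, n_plus), (-1, shots_per_basis - n_plus)):
        row = np.zeros(3)
        row[k] = s / 2
        rows.append(row)
        freqs.append(n / shots_per_basis)
A = np.array(rows)
y = np.array(freqs) - 0.5                     # subtract the constant offset

# Least squares via pseudoinverse (minimum-norm solution in the unmeasured direction).
r_ls = np.linalg.pinv(A) @ y

# Regularized least squares (ridge): replaces the pseudoinverse with an invertible map.
lam = 0.05
r_rls = np.linalg.solve(A.T @ A + lam * np.eye(3), A.T @ y)

print("true:", r_true)
print("LS  :", np.round(r_ls, 3))
print("RLS :", np.round(r_rls, 3))
```

In this toy example the pseudoinverse returns zero for the unmeasured Y component, while the ridge penalty additionally shrinks the measured components toward zero, a bias-variance tradeoff of the kind discussed in that abstract.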