Title: Optimal high-precision shadow estimation
The authors provide the first tight sample complexity bounds for shadow tomography and classical shadows in the regime where the target error is below some sufficiently small inverse polynomial in the dimension of the Hilbert space. Specifically, they present a protocol that, given any m ∈ ℕ and ϵ ≤ O(d^{−1/2}), measures O(log(m)/ϵ^2) copies of an unknown mixed state ρ ∈ ℂ^{d×d} and outputs a classical description of ρ. This description can then be used to estimate any collection of m observables to within additive accuracy ϵ. Previously, even for the simpler case of shadow tomography where the observables are known in advance, the best known rates either scaled benignly but suboptimally in all of m, d, and ϵ, or scaled optimally in ϵ and m but included additional polynomial factors in d. Interestingly, the authors also show via dimensionality reduction that one can rescale ϵ and d to reduce to the regime where ϵ ≤ O(d^{−1/2}). Their algorithm draws on representation-theoretic tools developed in the context of full state tomography.
Award ID(s):
2505865
PAR ID:
10631918
Author(s) / Creator(s):
Publisher / Repository:
https://doi.org/10.48550/arXiv.2407.13874
Date Published:
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
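For intuition about what a classical-shadows protocol does, here is a minimal, self-contained numpy sketch of the standard single-qubit estimator with random Pauli-basis measurements (in the style of Huang, Kueng, and Preskill). This is background only: the paper's high-precision protocol relies on representation-theoretic tools and is not this per-copy scheme, and every name below is invented for the example.

```python
# Toy single-qubit classical shadows with random Pauli-basis measurements.
# Illustrative only; NOT the representation-theoretic protocol of the paper.
import numpy as np

rng = np.random.default_rng(0)

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def sample_shadow(rho):
    """Measure one copy of rho in a uniformly random Pauli basis and
    return the single-copy shadow 3|v><v| - I, whose expectation is rho."""
    basis = [X, Y, Z][rng.integers(3)]
    _, evecs = np.linalg.eigh(basis)
    probs = np.real([v.conj() @ rho @ v for v in evecs.T])  # Born rule
    k = rng.choice(2, p=probs / probs.sum())
    v = evecs[:, [k]]
    return 3 * (v @ v.conj().T) - I2

# Unknown single-qubit state: mostly |0><0| with some depolarizing noise.
rho = 0.9 * np.array([[1, 0], [0, 0]], dtype=complex) + 0.1 * I2 / 2

shadows = [sample_shadow(rho) for _ in range(20000)]
for name, O in (("Z", Z), ("X", X)):
    est = np.mean([np.real(np.trace(O @ s)) for s in shadows])
    exact = np.real(np.trace(O @ rho))
    print(f"Tr({name} rho): estimate {est:+.3f}, exact {exact:+.3f}")
```

Averaging Tr(O · ρ̂) over single-copy shadows ρ̂ estimates Tr(O · ρ); a median-of-means over O(log m) batches extends the guarantee to m observables simultaneously, which is where the log(m) in the sample complexity comes from.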
More Like this
  1. The authors present an algorithm that, given an n-vertex, m-edge Eulerian graph with polynomially bounded weights, computes an Õ(n log^2 n · ε^{−2})-edge ε-approximate Eulerian sparsifier with high probability in Õ(m log^3 n) time (where Õ(·) hides polyloglog(n) factors). By a reduction from Peng-Song (STOC '22), this yields an Õ(m log^3 n + n log^6 n)-time algorithm for solving n-vertex, m-edge Eulerian Laplacian systems with polynomially bounded weights with high probability, improving on the previous state-of-the-art runtime of Ω(m log^8 n + n log^{23} n). They also provide a polynomial-time algorithm that computes sparsifiers with O(min(n log n · ε^{−2} + n log^{5/3} n · ε^{−4/3}, n log^{3/2} n · ε^{−2})) edges, improving the previous best bounds. Furthermore, they extend their techniques to yield the first O(m · polylog(n))-time algorithm for computing O(n ε^{−1} · polylog(n))-edge graphical spectral sketches, along with a natural Eulerian generalization. Unlike prior approaches using short cycle or expander decompositions, their algorithms leverage a new effective resistance decomposition scheme, combined with natural sampling and electrical routing for degree balance. The analysis applies asymmetric variance bounds specialized to Eulerian Laplacians and tools from discrepancy theory.
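As background for the sampling step, here is a minimal numpy sketch of classical effective-resistance sampling for undirected spectral sparsification (in the spirit of Spielman-Srivastava). It is not the paper's Eulerian scheme, which layers degree-balancing electrical routing on top of a new effective-resistance decomposition; the constant c and the dense pseudo-inverse below are toy-scale choices.

```python
# Toy effective-resistance sampling for UNDIRECTED spectral sparsification.
import numpy as np

rng = np.random.default_rng(1)

def laplacian(n, edges, w):
    """Weighted graph Laplacian L = D - A."""
    L = np.zeros((n, n))
    for (u, v), we in zip(edges, w):
        L[u, u] += we; L[v, v] += we
        L[u, v] -= we; L[v, u] -= we
    return L

def er_sparsify(n, edges, w, eps, c=1.0):
    """Keep edge e with prob ~ c log(n)/eps^2 * w_e * R_e; reweight by 1/p_e."""
    Lp = np.linalg.pinv(laplacian(n, edges, w))  # fine at toy scale only
    # Effective resistance R_e = (e_u - e_v)^T L^+ (e_u - e_v).
    R = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    p = np.minimum(1.0, c * np.log(n) / eps**2 * w * R)
    keep = rng.random(len(edges)) < p
    return [(e, we / pe) for e, we, pe, k in zip(edges, w, p, keep) if k]

# Toy input: the complete graph on 40 vertices with unit weights.
n = 40
edges = [(i, j) for i in range(n) for j in range(i + 1, n)]
w = np.ones(len(edges))
sparse = er_sparsify(n, edges, w, eps=1.0)
print(f"kept {len(sparse)} of {len(edges)} edges")
```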
  2. Traditional models of supervised learning require a learner, given examples from an arbitrary joint distribution on ℝ^d × {±1}, to output a hypothesis that competes (to within ϵ) with the best-fitting concept from a class. To overcome hardness results for learning even simple concept classes, this paper introduces a smoothed-analysis framework that only requires competition with the best classifier robust to small random Gaussian perturbations. This subtle shift enables a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (multi-index model) and (2) has bounded Gaussian surface area. This class includes functions of halfspaces and low-dimensional convex sets, which are only known to be learnable in non-smoothed settings with respect to highly structured distributions like Gaussians. The analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, the authors present the first algorithm for agnostically learning intersections of k halfspaces in time k · poly(log k, ϵ, γ), where γ is the margin parameter. Previously, the best-known runtime was exponential in k (Arriaga and Vempala, 1999).
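To make the smoothed benchmark concrete, the hypothetical numpy snippet below labels a point by majority vote of a concept over Gaussian perturbations x + σz; points near the decision boundary get unstable smoothed labels, which is precisely the sensitivity the framework excuses the learner from matching. None of this code is from the paper.

```python
# Toy illustration of a Gaussian-smoothed concept (not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(2)

def smoothed_label(f, x, sigma, trials=2000):
    """Majority vote of concept f over x + sigma * z, z ~ N(0, I)."""
    zs = sigma * rng.standard_normal((trials, x.size))
    votes = np.array([f(x + z) for z in zs])
    return 1 if votes.sum() >= 0 else -1

# Concept: a single halfspace sign(<w, x>), ties broken toward +1.
w = np.array([1.0, -2.0, 0.5])
f = lambda x: 1 if w @ x >= 0 else -1

x_near = np.array([0.05, 0.0, 0.0])  # close to the decision boundary
x_far = np.array([3.0, 0.0, 0.0])    # far from the decision boundary
for sigma in (0.01, 1.0):
    print(sigma, smoothed_label(f, x_near, sigma), smoothed_label(f, x_far, sigma))
```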
  3. This paper studies differentially private stochastic convex optimization (DP-SCO) in the presence of heavy-tailed gradients, where only a k-th-moment bound on sample Lipschitz constants is assumed, instead of a uniform bound. The authors propose a reduction-based approach that achieves the first near-optimal error rates (up to logarithmic factors) in this setting. Specifically, under (ϵ, δ)-approximate differential privacy, they achieve an error bound of G_2/√n + G_k · (√d/(nϵ))^{1−1/k}, up to a mild polylogarithmic factor in 1/δ, where G_2 and G_k are the 2nd and k-th moment bounds on sample Lipschitz constants. This nearly matches the lower bound established by Lowy and Razaviyayn (2023). Beyond the basic result, the authors introduce a suite of private algorithms that further improve performance under additional assumptions: an optimal algorithm under a known Lipschitz constant, a near-linear-time algorithm for smooth functions, and an optimal linear-time algorithm for smooth generalized linear models.
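For context, the standard primitive for private optimization when per-sample Lipschitz constants are unbounded is per-sample gradient clipping plus Gaussian noise (DP-SGD style). The sketch below shows only that generic primitive, not the paper's reduction-based algorithms; the clip threshold, noise multiplier, and step size are arbitrary illustrative values.

```python
# Generic clipped, noised gradient step (DP-SGD style), for illustration only.
import numpy as np

rng = np.random.default_rng(3)

def dp_sgd_step(theta, per_sample_grads, clip, noise_mult, lr):
    """One private gradient step: clip each per-sample gradient to an L2
    ball of radius `clip`, average, and add Gaussian noise calibrated to
    the resulting sensitivity clip/n."""
    g = np.zeros_like(theta)
    for grad in per_sample_grads:
        norm = np.linalg.norm(grad)
        g += grad * min(1.0, clip / max(norm, 1e-12))
    g /= len(per_sample_grads)
    sigma = noise_mult * clip / len(per_sample_grads)
    return theta - lr * (g + rng.normal(scale=sigma, size=theta.shape))

# Toy losses f_i(theta) = 0.5 * ||theta - x_i||^2 with heavy-tailed x_i.
xs = rng.standard_t(df=3, size=(100, 5))
theta = np.zeros(5)
for _ in range(200):
    grads = [theta - x for x in xs]
    theta = dp_sgd_step(theta, grads, clip=5.0, noise_mult=1.0, lr=0.1)
print("estimate:", np.round(theta, 2))
```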
  4. We investigate the problem of predicting the output behavior of unknown quantum channels. Given query access to an n-qubit channel E and an observable O, we aim to learn the mapping ρ ↦ Tr(O E[ρ]) to within a small error for most ρ sampled from a distribution D. Previously, Huang, Chen, and Preskill proved a surprising result: even if E is arbitrary, this task can be solved in time roughly n^{O(log(1/ϵ))}, where ϵ is the target prediction error. However, their guarantee applied only to input distributions D invariant under all single-qubit Clifford gates, and their algorithm fails for important cases such as general product distributions over product states ρ. In this work, we propose a new approach that achieves accurate prediction over essentially any product distribution D, provided it is not "classical", in which case there is a trivial exponential lower bound. Our method employs a "biased Pauli analysis," analogous to classical biased Fourier analysis. Implementing this approach requires overcoming several challenges unique to the quantum setting, including the lack of a basis with appropriate orthogonality properties. The techniques we develop to address these issues may have broader applications in quantum information.
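As background, "Pauli analysis" expands an operator over the tensor-product Pauli basis, the quantum analogue of the Fourier expansion of a Boolean function. The small sketch below computes the ordinary (unbiased) Pauli coefficients of an observable; the paper's biased Pauli analysis instead works with a basis matched to the product distribution D, which lacks the clean orthogonality used here.

```python
# Ordinary Pauli expansion of an observable: O = sum_P c_P P,
# with c_P = 2^{-n} Tr(P O). Background only, not the paper's biased basis.
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULI = {"I": I2, "X": X, "Y": Y, "Z": Z}

def pauli_coeffs(O, n):
    """Coefficients c_P over all 4^n Pauli strings P."""
    coeffs = {}
    for labels in product("IXYZ", repeat=n):
        P = np.array([[1.0 + 0j]])
        for l in labels:
            P = np.kron(P, PAULI[l])
        coeffs["".join(labels)] = np.trace(P @ O).real / 2**n
    return coeffs

# Example: O = Z (x) Z plus a small X (x) I term on two qubits.
O = np.kron(Z, Z) + 0.25 * np.kron(X, I2)
for name, c in pauli_coeffs(O, 2).items():
    if abs(c) > 1e-9:
        print(name, c)  # prints XI 0.25 and ZZ 1.0
```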
  5. Given a set P of n weighted points and a set S of m disks in the plane, the hitting set problem is to compute a subset P′ of points of P such that each disk contains at least one point of P′ and the total weight of all points of P′ is minimized. The problem is known to be NP-hard. In this paper, we consider a line-constrained version of the problem in which all disks are centered on a line ℓ. We present an O((m+n) log(m+n) + κ log m)-time algorithm for the problem, where κ is the number of pairs of disks that intersect. For the unit-disk case, where all disks have the same radius, the running time can be reduced to O((n+m) log(m+n)). In addition, we solve the problem in O((m+n) log(m+n)) time in the L_∞ and L_1 metrics, in which a disk is a square and a diamond, respectively.
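The line-constrained structure can be made concrete: a disk of radius r centered at (c, 0) on ℓ contains a point p = (x, y) iff |c − x| ≤ √(r² − y²), so each point covers an interval of center positions and the problem becomes a weighted interval-hitting problem. The brute-force sketch below (hypothetical names, unit-disk case) illustrates only this reduction; the paper's algorithms solve it in the stated near-linear time.

```python
# Reduction for the unit-disk, line-constrained case; brute force for clarity.
import math

def coverage_interval(px, py, r):
    """Center positions c on the line whose radius-r disk contains (px, py)."""
    if abs(py) > r:
        return None  # the point can never lie inside such a disk
    h = math.sqrt(r * r - py * py)
    return (px - h, px + h)

def min_weight_hitting_set(points, weights, centers, r):
    """Cheapest subset of points hitting every disk, by subset enumeration."""
    ivals = [coverage_interval(x, y, r) for x, y in points]
    hits = lambda i, c: ivals[i] is not None and ivals[i][0] <= c <= ivals[i][1]
    best_w, best = math.inf, None
    for mask in range(1, 1 << len(points)):
        chosen = [i for i in range(len(points)) if mask >> i & 1]
        if all(any(hits(i, c) for i in chosen) for c in centers):
            wsum = sum(weights[i] for i in chosen)
            if wsum < best_w:
                best_w, best = wsum, chosen
    return best_w, best

# Three weighted points and three unit disks centered on the x-axis.
points = [(0.0, 0.5), (2.0, 0.0), (4.0, 0.8)]
weights = [3.0, 1.0, 2.0]
centers = [0.0, 2.0, 4.0]
print(min_weight_hitting_set(points, weights, centers, r=1.0))
```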