Title: Optimally Sparse Representations of Cartoon-Like Cylindrical Data
Sparse representations of multidimensional data have received significant attention in the literature due to their applications in problems of data restoration and feature extraction. In this paper, we consider an idealized class C²(Z) ⊂ L²(ℝ³) of 3-dimensional data dominated by surface singularities that are orthogonal to the xy-plane. To deal with this type of data, we introduce a new multiscale directional representation called cylindrical shearlets and prove that this new approach achieves superior approximation properties not only with respect to conventional multiscale representations but also with respect to 3-dimensional shearlets and curvelets. Specifically, the N-term approximation f_N^S obtained by selecting the N largest coefficients of the cylindrical shearlet expansion of a function f ∈ C²(Z) satisfies the asymptotic estimate ∥f − f_N^S∥₂² ≤ c N^{−2} (ln N)³ as N → ∞. This is the optimal decay rate, up to the logarithmic factor, outperforming 3D wavelet and 3D shearlet approximations, which only yield approximation rates of order N^{−1/2} and N^{−1} (ignoring logarithmic factors), respectively, on the same type of data.
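The three decay rates quoted above can be compared numerically. Below is a minimal sketch with all constants set to 1, so the curves illustrate relative asymptotic decay only, not actual approximation errors for any particular function:

```python
import math

# Asymptotic N-term approximation error rates quoted in the abstract,
# with constants normalized to 1 (illustrative only).
def wavelet_rate(N):      # 3D wavelets: O(N^{-1/2})
    return N ** -0.5

def shearlet3d_rate(N):   # 3D shearlets: O(N^{-1})
    return 1.0 / N

def cyl_shearlet_rate(N): # cylindrical shearlets: O(N^{-2} (ln N)^3)
    return N ** -2 * math.log(N) ** 3

for N in (10, 100, 1000, 10000):
    print(N, wavelet_rate(N), shearlet3d_rate(N), cyl_shearlet_rate(N))
```

For large N the cylindrical shearlet rate dominates despite the (ln N)³ factor; at very small N the logarithmic factor can still be comparable to the polynomial gain, which is why such estimates are stated asymptotically.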
Award ID(s):
1720452
NSF-PAR ID:
10276915
Author(s) / Creator(s):
Date Published:
Journal Name:
The Journal of Geometric Analysis
ISSN:
1050-6926
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We give two new quantum algorithms for solving semidefinite programs (SDPs) that provide quantum speed-ups. We consider SDP instances with m constraint matrices, each of dimension n, rank at most r, and sparsity s. The first algorithm assumes an input model where one is given oracle access to the entries of the matrices at unit cost. We show that it has run time Õ(s^2 (√m ε^{−10} + √n ε^{−12})), with ε the error of the solution. This gives an optimal dependence in terms of m and n and a quadratic improvement over previous quantum algorithms (when m ≈ n). The second algorithm assumes a fully quantum input model in which the input matrices are given as quantum states. We show that its run time is Õ(√m + poly(r))·poly(log m, log n, B, ε^{−1}), with B an upper bound on the trace-norm of all input matrices. In particular, the complexity depends only polylogarithmically on n and polynomially on r. We apply the second SDP solver to learn a good description of a quantum state with respect to a set of measurements: given m measurements and a supply of copies of an unknown state ρ with rank at most r, we show we can find, in time √m · poly(log m, log n, r, ε^{−1}), a description of the state as a quantum circuit preparing a density matrix which has the same expectation values as ρ on the m measurements, up to error ε. The density matrix obtained is an approximation to the maximum-entropy state consistent with the measurement data, as considered in Jaynes' principle from statistical mechanics. As in previous work, we obtain our algorithm by "quantizing" classical SDP solvers based on the matrix multiplicative weight update method. One of our main technical contributions is a quantum Gibbs state sampler for low-rank Hamiltonians, given quantum states encoding these Hamiltonians, with a polylogarithmic dependence on the dimension, which is based on ideas developed in quantum principal component analysis.
We also develop a "fast" quantum OR lemma with a quadratic improvement in gate complexity over the construction of Harrow et al. [Harrow et al., 2017]. We believe both techniques might be of independent interest.
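The classical matrix multiplicative weights update that the quantum solvers above "quantize" can be sketched in a few lines. This is only the core density-matrix update, not the paper's algorithm; the loss matrix and step size below are illustrative placeholders:

```python
import numpy as np

# Core of the classical matrix multiplicative weights (MMW) method:
# maintain a density matrix rho_t proportional to exp(-eta * sum of
# observed loss matrices). The losses here are random placeholders.
def mmw_density(losses, eta=0.1):
    """Return rho = exp(-eta * sum(losses)) / tr(...) for symmetric losses."""
    total = sum(losses)
    # matrix exponential via eigendecomposition (valid for symmetric input)
    vals, vecs = np.linalg.eigh(total)
    w = vecs @ np.diag(np.exp(-eta * vals)) @ vecs.T
    return w / np.trace(w)

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
A = (A + A.T) / 2                # symmetrize the toy loss matrix
rho = mmw_density([A])
print(np.trace(rho))             # a density matrix: unit trace, PSD
```

The quantum speed-ups come from preparing and sampling such Gibbs-like states directly, rather than computing the matrix exponential explicitly as done here.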
  2. Abstract

    We study the distribution over measurement outcomes of noisy random quantum circuits in the regime of low fidelity, which corresponds to the setting where the computation experiences at least one gate-level error with probability close to one. We model noise by adding a pair of weak, unital, single-qubit noise channels after each two-qubit gate, and we show that for typical random circuit instances, correlations between the noisy output distribution $p_{\text{noisy}}$ and the corresponding noiseless output distribution $p_{\text{ideal}}$ shrink exponentially with the expected number of gate-level errors. Specifically, the linear cross-entropy benchmark $F$ that measures this correlation behaves as $F = \exp(-2s\epsilon \pm O(s\epsilon^2))$, where $\epsilon$ is the probability of error per circuit location and $s$ is the number of two-qubit gates. Furthermore, if the noise is incoherent (for example, depolarizing or dephasing noise), the total variation distance between the noisy output distribution $p_{\text{noisy}}$ and the uniform distribution $p_{\text{unif}}$ decays at precisely the same rate. Consequently, the noisy output distribution can be approximated as $p_{\text{noisy}} \approx F p_{\text{ideal}} + (1-F) p_{\text{unif}}$. In other words, although at least one local error occurs with probability $1-F$, the errors are scrambled by the random quantum circuit and can be treated as global white noise, contributing completely uniform output. Importantly, we upper bound the average total variation error in this approximation by $O(F \epsilon \sqrt{s})$. Thus, the "white-noise approximation" is meaningful when $\epsilon \sqrt{s} \ll 1$, a quadratically weaker condition than the $\epsilon s \ll 1$ requirement to maintain high fidelity.
The bound applies if the circuit size satisfies $s \ge \Omega(n \log(n))$, which corresponds to only logarithmic-depth circuits, and if, additionally, the inverse error rate satisfies $\epsilon^{-1} \ge \tilde{\Omega}(n)$, which is needed to ensure errors are scrambled faster than $F$ decays. The white-noise approximation is useful for salvaging the signal from a noisy quantum computation; for example, it was an underlying assumption in complexity-theoretic arguments that noisy random quantum circuits cannot be efficiently sampled classically, even when the fidelity is low. Our method is based on a map from second-moment quantities in random quantum circuits to expectation values of certain stochastic processes for which we compute upper and lower bounds.
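The white-noise mixture and its effect on total variation distance can be checked with toy numbers. The four-outcome distribution below is made up; only the arithmetic of the approximation is illustrated:

```python
import math

# Toy check of the white-noise approximation described above:
# p_noisy ~= F * p_ideal + (1 - F) * p_unif, with F = exp(-2 * s * eps).
def fidelity_proxy(s, eps):
    return math.exp(-2 * s * eps)

def white_noise_mix(p_ideal, F):
    n = len(p_ideal)
    return [F * p + (1 - F) / n for p in p_ideal]

def tv_distance(p, q):
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

p_ideal = [0.5, 0.3, 0.15, 0.05]            # made-up ideal distribution
F = fidelity_proxy(s=100, eps=0.005)        # F = exp(-1), about 0.368
p_noisy = white_noise_mix(p_ideal, F)
p_unif = [0.25] * 4

# Under this model, TV(p_noisy, p_unif) = F * TV(p_ideal, p_unif):
# the distance to uniform shrinks by exactly the factor F.
print(tv_distance(p_noisy, p_unif), F * tv_distance(p_ideal, p_unif))
```

This makes concrete why low fidelity does not erase the signal: the ideal distribution is still present in the mixture, attenuated by F, on top of a uniform background.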

     
  3. In this paper, we consider two fundamental cut approximation problems on large graphs. We prove new lower bounds for both problems that are optimal up to logarithmic factors. The first problem is approximating cuts in balanced directed graphs, where the goal is to build a data structure to provide a $(1 \pm \epsilon)$-estimation of the cut values of a graph on $n$ vertices. For this problem, there are tight bounds for undirected graphs, but for directed graphs, such a data structure requires $\Omega(n^2)$ bits even for constant $\epsilon$. To cope with this, recent works consider $\beta$-balanced graphs, meaning that for every directed cut, the total weight of edges in one direction is at most $\beta$ times the total weight in the other direction. We consider the for-each model, where the goal is to approximate a fixed cut with high probability, and the for-all model, where the data structure must simultaneously preserve all cuts. We improve the previous $\Omega(n \sqrt{\beta/\epsilon})$ lower bound in the for-each model to $\tilde\Omega(n \sqrt{\beta}/\epsilon)$ and we improve the previous $\Omega(n \beta/\epsilon)$ lower bound in the for-all model to $\Omega(n \beta/\epsilon^2)$. This resolves the main open questions of (Cen et al., ICALP, 2021). The second problem is approximating the global minimum cut in the local query model where we can only access the graph through degree, edge, and adjacency queries. We prove an $\Omega(\min\{m, \frac{m}{\epsilon^2 k}\})$ lower bound for this problem, which improves the previous $\Omega(\frac{m}{k})$ lower bound, where $m$ is the number of edges of the graph, $k$ is the minimum cut size, and we seek a $(1+\epsilon)$-approximation. In addition, we observe that existing upper bounds with minor modifications match our lower bound up to logarithmic factors. 
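The beta-balance condition described above can be made concrete with a brute-force check on a toy weighted digraph. This enumeration is illustrative only; real instances are far too large for it, which is exactly why sketching data structures are studied:

```python
from itertools import combinations

# Beta-balance: for every directed cut (S, V \ S), the weight crossing in
# one direction is at most beta times the weight crossing back.
def cut_weights(edges, S):
    """Return (weight S -> complement, weight complement -> S)."""
    out_w = sum(w for u, v, w in edges if u in S and v not in S)
    in_w = sum(w for u, v, w in edges if u not in S and v in S)
    return out_w, in_w

def balance(edges, V):
    """Smallest beta such that the graph is beta-balanced (brute force)."""
    beta = 1.0
    nodes = list(V)
    for r in range(1, len(nodes)):
        for S in map(set, combinations(nodes, r)):
            out_w, in_w = cut_weights(edges, S)
            if min(out_w, in_w) > 0:
                beta = max(beta, out_w / in_w, in_w / out_w)
    return beta

V = {0, 1, 2}  # toy 3-vertex weighted digraph
edges = [(0, 1, 2.0), (1, 0, 1.0), (1, 2, 1.0),
         (2, 1, 1.0), (2, 0, 1.0), (0, 2, 1.0)]
print(balance(edges, V))
```

For this toy graph the worst cut is S = {0}, with weight 3 leaving and 2 entering, so the graph is 1.5-balanced.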
  4. We prove in generic situations that the lattice in a tame type induced by the completed cohomology of a $U(3)$-arithmetic manifold is purely local, that is, only depends on the Galois representation at places above $p$. This is a generalization to $\text{GL}_{3}$ of the lattice conjecture of Breuil. In the process, we also prove the geometric Breuil–Mézard conjecture for (tamely) potentially crystalline deformation rings with Hodge–Tate weights $(2,1,0)$ as well as the Serre weight conjectures of Herzig ['The weight in a Serre-type conjecture for tame $n$-dimensional Galois representations', Duke Math. J. 149 (1) (2009), 37–116] over an unramified field, extending the results of Le et al. ['Potentially crystalline deformation rings and Serre weight conjectures: shapes and shadows', Invent. Math. 212 (1) (2018), 1–107]. We also prove results in modular representation theory about lattices in Deligne–Lusztig representations for the group $\text{GL}_{3}(\mathbb{F}_{q})$.
  5. Nonasymptotic bounds for Gaussian and bootstrap approximation have recently attracted significant interest in high-dimensional statistics. This paper studies Berry–Esseen bounds for such approximations with respect to the multivariate Kolmogorov distance, in the context of a sum of n random vectors that are p-dimensional and i.i.d. Up to now, a growing line of work has established bounds with mild logarithmic dependence on p. However, the problem of developing corresponding bounds with near n^{−1/2} dependence on n has remained largely unresolved. Within the setting of random vectors that have sub-Gaussian or subexponential entries, this paper establishes bounds with near n^{−1/2} dependence, for both Gaussian and bootstrap approximation. In addition, the proofs are considerably distinct from other recent approaches, and make use of an “implicit smoothing” operation in the Lindeberg interpolation. 
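The bootstrap approximation setting discussed above can be sketched with a toy empirical bootstrap of the max-coordinate statistic. The dimensions, sample size, and choice of statistic below are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Toy empirical bootstrap for a max-type statistic of a sum of n i.i.d.
# p-dimensional vectors with sub-Gaussian (here standard normal) entries.
rng = np.random.default_rng(1)
n, p, B = 500, 20, 2000
X = rng.standard_normal((n, p))

stat = np.sqrt(n) * X.mean(axis=0).max()  # observed max statistic

Xc = X - X.mean(axis=0)                   # center before resampling
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)      # resample rows with replacement
    boot[b] = np.sqrt(n) * Xc[idx].mean(axis=0).max()

# bootstrap estimate of P(statistic <= observed value)
print((boot <= stat).mean())
```

Berry–Esseen bounds of the kind proved in the paper quantify how close such a bootstrap (or Gaussian) distribution is to the true sampling distribution, uniformly over rectangles, as a function of n and p.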