We study the problem of finding a (pure) product state with optimal fidelity to an unknown n-qubit quantum state ρ, given copies of ρ. This is a basic instance of a fundamental question in quantum learning: is it possible to efficiently learn a simple approximation to an arbitrary state? We give an algorithm which finds a product state with fidelity ε-close to optimal, using N = n^{poly(1/ε)} copies of ρ and poly(N) classical overhead. We further show that estimating the optimal fidelity is NP-hard for error ε = 1/poly(n), which implies that the error dependence of our algorithm cannot be significantly improved. For our algorithm, we build a carefully defined cover over candidate product states, qubit by qubit, and then show that extending the cover can be reduced to approximate constrained polynomial optimization. For our proof of hardness, we give a formal reduction from polynomial optimization to finding the closest product state. Together, these results demonstrate a fundamental connection between these two seemingly unrelated questions. Building on our general approach, we also develop more efficient algorithms in three simpler settings: when the optimal fidelity exceeds 5/6; when we restrict ourselves to a discrete class of product states; and when we are allowed to output a matrix product state.
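For reference, the quantity being optimized is the fidelity ⟨ψ|ρ|ψ⟩ of a pure product state |ψ⟩ = |ψ_1⟩ ⊗ … ⊗ |ψ_n⟩ with ρ. A minimal numerical sketch of this evaluation (our own illustration, not the paper's algorithm; function and variable names are ours):

```python
import numpy as np

def product_state_fidelity(rho, single_qubit_states):
    """Fidelity <psi|rho|psi> of the pure product state formed by the
    given single-qubit state vectors with the density matrix rho."""
    psi = single_qubit_states[0]
    for s in single_qubit_states[1:]:
        psi = np.kron(psi, s)  # tensor product, qubit by qubit
    return float(np.real(np.conj(psi) @ rho @ psi))

# Example: rho = |00><00|, candidate product state |+> (x) |0>
ket0 = np.array([1.0, 0.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)
ket00 = np.kron(ket0, ket0)
rho = np.outer(ket00, ket00)
f = product_state_fidelity(rho, [plus, ket0])  # |<+|0>|^2 = 0.5
```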
On the Hardness of Approximating Distances of Quantum Codes
The problem of computing distances of error-correcting codes is fundamental in both the classical and quantum settings. While hardness for the classical version of these problems has been known for some time (in both the exact and approximate settings), it was only recently that Kapshikar and Kundu showed these problems are also hard in the quantum setting. As our first main result, we reprove this using arguably simpler arguments based on hypergraph product codes. In particular, we get a direct reduction to CSS codes, the most commonly used type of quantum code, from the minimum distance problem for classical linear codes. Our second set of results considers the distance of a graph state, which is a key parameter for quantum codes obtained via the codeword stabilized formalism. We show that it is NP-hard to compute/approximate the distance of a graph state when the adjacency matrix of the graph is the input. In fact, we show this is true even if we only consider X-type errors of a graph state. Our techniques moreover imply an interesting classical consequence: the hardness of computing or approximating the distance of classical codes with rate equal to 1/2. One of the main motivations of the present work is a question raised by Kapshikar and Kundu concerning the NP-hardness of approximation when there is an additive error proportional to a quantum code’s length. We show that no such hardness can hold for hypergraph product codes. These observations suggest the possibility of a new kind of square root barrier.
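The hypergraph product construction referenced above combines two classical parity-check matrices into the X- and Z-check matrices of a CSS code. The following sketch builds them via the standard construction and verifies the CSS commutation condition H_X H_Z^T = 0 (mod 2); it is our own illustration, not code from the paper:

```python
import numpy as np

def hypergraph_product(H1, H2):
    """CSS check matrices of the hypergraph product of two classical
    parity-check matrices (standard construction, mod-2 arithmetic)."""
    r1, n1 = H1.shape
    r2, n2 = H2.shape
    HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                    np.kron(np.eye(r1, dtype=int), H2.T)])
    HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                    np.kron(H1.T, np.eye(r2, dtype=int))])
    return HX % 2, HZ % 2

# Checks of a length-3 repetition code as both input codes
H = np.array([[1, 1, 0], [0, 1, 1]])
HX, HZ = hypergraph_product(H, H)
assert np.all((HX @ HZ.T) % 2 == 0)  # X- and Z-checks commute
```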
- Award ID(s):
- 2330130
- PAR ID:
- 10658279
- Editor(s):
- Aiswarya, C; Mehta, Ruta; Roy, Subhajit
- Publisher / Repository:
- Schloss Dagstuhl – Leibniz-Zentrum für Informatik
- Date Published:
- Volume:
- 360
- ISSN:
- 1868-8969
- Page Range / eLocation ID:
- 34:1-34:18
- Subject(s) / Keyword(s):
- quantum codes; minimum distance problem; NP-hardness; graph state distance; Theory of computation → Error-correcting codes
- Format(s):
- Medium: X Size: 18 pages; 802802 bytes Other: application/pdf
- Sponsoring Org:
- National Science Foundation
More Like this
-
A canonical feature of the constraint satisfaction problems in NP is approximation hardness: in the worst case, finding sufficient-quality approximate solutions is exponentially hard for all known methods. Fundamentally, the lack of any guided local-minimum-escape method ensures both exact and approximate classical hardness, but the equivalent mechanism(s) for quantum algorithms are poorly understood. For algorithms based on Hamiltonian time evolution, we explore this question through the prototypically hard MAX-3-XORSAT problem class. We conclude that the mechanisms for quantum exact and approximation hardness are fundamentally distinct. We review known results from the literature, and identify mechanisms that make conventional quantum methods (such as Adiabatic Quantum Computing) weak approximation algorithms in the worst case. We construct a family of spectrally filtered quantum algorithms that escape these issues, and develop analytical theories for their performance. We show that, for random hypergraphs in the approximation-hard regime, if we define the energy to be E = N_unsat - N_sat, spectrally filtered quantum optimization will return states with E ≤ q_m E_GS (where E_GS is the ground-state energy) in sub-quadratic time, where, conservatively, q_m ≃ 0.59. This is in contrast to q_m → 0 for the hardest instances with classical searches. We test all of these claims with extensive numerical simulations. We do not claim that this approximation guarantee holds for all possible hypergraphs, though our algorithm's mechanism can likely generalize widely. These results suggest that quantum computers are more powerful for approximate optimization than had been previously assumed.
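To make the energy convention above concrete, here is a minimal sketch (our own illustration, not the paper's code; names are ours) that evaluates E = N_unsat - N_sat for a 3-XORSAT instance under a given Boolean assignment:

```python
def xorsat_energy(clauses, assignment):
    """E = N_unsat - N_sat for a 3-XORSAT instance.
    Each clause is (i, j, k, parity) and is satisfied iff
    x_i XOR x_j XOR x_k == parity."""
    n_sat = sum(1 for (i, j, k, p) in clauses
                if assignment[i] ^ assignment[j] ^ assignment[k] == p)
    n_unsat = len(clauses) - n_sat
    return n_unsat - n_sat

# A mutually contradictory clause pair: any assignment satisfies exactly one
clauses = [(0, 1, 2, 0), (0, 1, 2, 1)]
E = xorsat_energy(clauses, [0, 0, 0])  # one satisfied, one violated -> E = 0
```

Under this convention the ground-state energy E_GS of a satisfiable instance is minus the number of clauses, and frustrated instances have E_GS > -len(clauses).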
-
Establishing the complexity of {\em Bounded Distance Decoding} for Reed-Solomon codes is a fundamental open problem in coding theory, explicitly asked by Guruswami and Vardy (IEEE Trans. Inf. Theory, 2005). The problem is motivated by the large current gap between the regime where it is NP-hard and the regime where it is efficiently solvable (i.e., the Johnson radius). We show the first NP-hardness results for asymptotically smaller decoding radii than the maximum-likelihood decoding radius of Guruswami and Vardy. Specifically, for Reed-Solomon codes of length $N$ and dimension $K=O(N)$, we show that it is NP-hard to decode more than $N-K-c\frac{\log N}{\log\log N}$ errors (with $c>0$ an absolute constant). Moreover, we show that the problem is NP-hard under quasipolynomial-time reductions for an error amount $> N-K-c\log N$ (with $c>0$ an absolute constant). An alternative natural reformulation of the Bounded Distance Decoding problem for Reed-Solomon codes is as a {\em Polynomial Reconstruction} problem. In this view, our results show that it is NP-hard to decide whether there exists a degree-$K$ polynomial passing through $K+c\frac{\log N}{\log\log N}$ points from a given set of points $(a_1, b_1), (a_2, b_2), \ldots, (a_N, b_N)$. Furthermore, it is NP-hard under quasipolynomial-time reductions to decide whether there is a degree-$K$ polynomial passing through $K+c\log N$ many points. These results follow from the NP-hardness of a generalization of the classical Subset Sum problem to higher moments, called {\em Moments Subset Sum}, which has been a known open problem, and which may be of independent interest. We further reveal a strong connection with the well-studied Prouhet-Tarry-Escott problem in Number Theory, which turns out to capture a main barrier in extending our techniques. We believe the Prouhet-Tarry-Escott problem deserves further study in the theoretical computer science community.
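As a toy illustration of the Polynomial Reconstruction view, the following sketch brute-forces the decision problem on tiny instances (our own exponential-time illustration, not an algorithm from the paper). Since a degree-K polynomial through T ≥ K+1 points is determined by any K+1 of them, it suffices to interpolate each (K+1)-subset and count agreements:

```python
from itertools import combinations
import numpy as np

def exists_poly_through(points, K, T):
    """Does some polynomial of degree <= K pass through at least T of the
    given points?  Brute force: interpolate every (K+1)-subset and count
    how many input points lie on the result.  Assumes T >= K + 1 and
    distinct x-coordinates."""
    for subset in combinations(points, K + 1):
        xs, ys = zip(*subset)
        coeffs = np.polyfit(xs, ys, K)  # exact interpolation on K+1 points
        hits = sum(1 for (x, y) in points
                   if abs(np.polyval(coeffs, x) - y) < 1e-9)
        if hits >= T:
            return True
    return False

points = [(0, 0), (1, 1), (2, 4), (3, 9), (4, 5)]  # four lie on y = x^2
```

Here `exists_poly_through(points, 2, 4)` holds (via y = x²), while no degree-2 polynomial hits all five points. The hardness results above say that, for Reed-Solomon parameter regimes, no efficient algorithm can replace this exponential search in general.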
-
Meka, Raghu (Ed.) In recent years, quantum computing involving physical systems with continuous degrees of freedom, such as the bosonic quantum states of light, has attracted significant interest. However, a well-defined quantum complexity theory for these bosonic computations over infinite-dimensional Hilbert spaces is missing. In this work, we lay the foundations for such a research program. We introduce natural complexity classes and problems based on bosonic generalizations of BQP, the local Hamiltonian problem, and QMA. We uncover several relationships and subtle differences between standard Boolean classical and discrete-variable quantum complexity classes, and identify outstanding open problems. Our main contributions include the following: 1) Bosonic computations. We show that the power of Gaussian computations up to logspace reductions is equivalent to bounded-error quantum logspace (BQL, characterized by the problem of inverting well-conditioned matrices). More generally, we define classes of continuous-variable quantum polynomial time computations with a bounded probability of error (CVBQP) based on gates generated by polynomial bosonic Hamiltonians and particle-number measurements. Due to the infinite-dimensional Hilbert space, it is not a priori clear whether a decidable upper bound can be obtained for these classes. We identify complete problems for these classes, and we demonstrate a BQP lower bound and an EXPSPACE upper bound by proving bounds on the average energy throughout the computation. We further show that the problem of computing expectation values of polynomial bosonic observables at the output of bosonic quantum circuits using Gaussian and cubic phase gates is in PSPACE. 2) Bosonic ground energy problems. We prove that the problem of deciding whether the spectrum of a bosonic Hamiltonian is bounded from below is co-NP-hard.
Furthermore, we show that the problem of finding the minimum energy of a bosonic Hamiltonian critically depends on the non-Gaussian stellar rank of the family of energy-constrained states one optimizes over: for zero stellar rank, i.e., optimizing over Gaussian states, it is NP-complete; for polynomially-bounded stellar rank, it is in QMA; for unbounded stellar rank, it is RE-hard, i.e., undecidable.
-
Abstract: The Quantum Approximate Optimization Algorithm (QAOA) searches for approximate solutions to discrete optimization problems with near-term quantum computers. Since no algorithmic guarantee that QAOA outperforms classical computers is possible without a proof that bounded-error quantum polynomial time (BQP) ≠ nondeterministic polynomial time (NP), it is necessary to investigate the empirical advantages of QAOA. We identify a computational phase transition of QAOA when solving hard problems such as SAT: random instances are most difficult to train at a critical problem density. We connect the transition to the controllability and the complexity of QAOA circuits. Moreover, we find that the critical problem density generally deviates from the SAT-UNSAT phase transition, where the hardest instances for classical algorithms lie. We then show that the high-problem-density region, which limits QAOA's performance on hard optimization problems (reachability deficits), is actually a good regime in which to use QAOA: its approximation ratio decays much more slowly with problem density than that of classical approximation algorithms. Indeed, it is exactly in this region that quantum advantages of QAOA over classical approximation algorithms can be identified.