Symmetry is a unifying concept in physics. In quantum information and beyond, it is known that quantum states possessing symmetry are not useful for certain information-processing tasks. For example, states that commute with a Hamiltonian realizing a time evolution are not useful for timekeeping during that evolution, and bipartite states that are highly extendible are not strongly entangled and thus not useful for basic tasks like teleportation. Motivated by this perspective, this paper details several quantum algorithms that test the symmetry of quantum states and channels. For the case of testing Bose symmetry of a state, we show that there is a simple and efficient quantum algorithm, while the tests for other kinds of symmetry rely on the aid of a quantum prover. We prove that the acceptance probability of each algorithm is equal to the maximum symmetric fidelity of the state being tested, thus giving a firm operational meaning to these latter resource quantifiers. Special cases of the algorithms test for incoherence or separability of quantum states. We evaluate the performance of these algorithms on choice examples by using the variational approach to quantum algorithms, replacing the quantum prover with a parameterized circuit. We demonstrate this approach for numerous examples using the IBM quantum noiseless and noisy simulators, and we observe that the algorithms perform well in the noiseless case and exhibit noise resilience in the noisy case. We also show that the maximum symmetric fidelities can be calculated by semi-definite programs, which is useful for benchmarking the performance of these algorithms for sufficiently small examples. Finally, we establish various generalizations of the resource theory of asymmetry, with the upshot being that the acceptance probabilities of the algorithms are resource monotones and thus well motivated from the resource-theoretic perspective.
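For concreteness, a natural formalization of the maximum symmetric fidelity mentioned above (the notation $G$, $U(g)$, and $F_{\mathrm{sym}}$ is ours, and the squared Uhlmann convention for fidelity is an assumption, not fixed by the abstract) is the best fidelity between the tested state $\rho$ and any state invariant under the symmetry:

$$F_{\mathrm{sym}}(\rho) \;:=\; \max_{\sigma \in \mathcal{S}_G} F(\rho,\sigma), \qquad \mathcal{S}_G := \{\sigma \ge 0,\ \mathrm{Tr}\,\sigma = 1 : U(g)\,\sigma\,U(g)^{\dagger} = \sigma \ \ \forall g \in G\}, \qquad F(\rho,\sigma) := \big\|\sqrt{\rho}\,\sqrt{\sigma}\big\|_{1}^{2}.$$

The abstract's claim is that each test accepts with probability equal to such a quantity; since fidelity admits a semidefinite-programming characterization, this is also how the benchmark values can be computed for sufficiently small examples.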
Cycle index polynomials and generalized quantum separability tests
The mixedness of one share of a pure bipartite state determines whether the overall state is a separable, unentangled one. Here we consider quantum computational tests of mixedness, and we derive an exact expression for the acceptance probability of such tests as the number of copies of the state becomes larger. We prove that the analytical form of this expression is given by the cycle index polynomial of the symmetric group $S_k$, which is itself related to the Bell polynomials. After doing so, we derive a family of quantum separability tests, each of which is generated by a finite group; for all such algorithms, we show that the acceptance probability is determined by the cycle index polynomial of the group. Finally, we produce and analyse explicit circuit constructions for these tests, showing that the tests corresponding to the symmetric and cyclic groups can be executed with $O(k^2)$ and $O(k \log k)$ controlled-SWAP gates, respectively, where $k$ is the number of copies of the state being tested.
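As a concrete illustration of how the cycle index polynomial can enter, the standard permutation-operator identity $\mathrm{Tr}[W_{\pi}\,\rho^{\otimes k}] = \prod_{\text{cycles } c \text{ of } \pi} \mathrm{Tr}[\rho^{|c|}]$ suggests evaluating the cycle index of $S_k$ at the moments $\mathrm{Tr}[\rho^j]$. The sketch below is an illustrative reading under that assumption (not code from the paper), using the usual recurrence $Z_k = \tfrac{1}{k}\sum_{j=1}^{k} x_j Z_{k-j}$:

```python
import numpy as np

def power_traces(rho, k):
    """Return [Tr(rho), Tr(rho^2), ..., Tr(rho^k)] for a density matrix rho."""
    traces, m = [], np.eye(rho.shape[0], dtype=complex)
    for _ in range(k):
        m = m @ rho
        traces.append(float(np.trace(m).real))
    return traces

def cycle_index_symmetric(values):
    """Evaluate the cycle index polynomial of S_k at x_j = values[j-1],
    via the recurrence Z_m = (1/m) * sum_{j=1}^{m} x_j * Z_{m-j}, with Z_0 = 1."""
    k = len(values)
    Z = [1.0]
    for m in range(1, k + 1):
        Z.append(sum(values[j - 1] * Z[m - j] for j in range(1, m + 1)) / m)
    return Z[k]

# Example: a mixed qubit state with eigenvalues (0.75, 0.25) and k = 3 copies.
rho = np.diag([0.75, 0.25]).astype(complex)
print(cycle_index_symmetric(power_traces(rho, 3)))   # prints 0.625
# A pure reduced state has Tr(rho^j) = 1 for all j, so the value is 1 (certain acceptance).
```

For a pure reduced state every moment equals 1 and the evaluated polynomial is 1, matching the intuition that an unentangled bipartite pure state passes the test with certainty.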
- Award ID(s): 2315398
- PAR ID: 10429117
- Date Published:
- Journal Name: Proceedings of the Royal Society A: Mathematical, Physical and Engineering Sciences
- Volume: 479
- Issue: 2274
- ISSN: 1364-5021
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- In this paper, we consider the problem of noiseless non-adaptive probabilistic group testing, in which the goal is high-probability recovery of the defective set. We show that in the case of $n$ items among which $k$ are defective, the smallest possible number of tests equals $\min\{C_{k,n}\, k \log n,\ n\}$ up to lower-order asymptotic terms, where $C_{k,n}$ is a uniformly bounded constant (varying depending on the scaling of $k$ with respect to $n$) with a simple explicit expression. The algorithmic upper bound follows from a minor adaptation of an existing analysis of the Definite Defectives algorithm (a sketch of its decoding rule appears after this list), and the algorithm-independent lower bound builds on existing works for the regimes $k \le n^{1-\Omega(1)}$ and $k = \Theta(n)$. In sufficiently sparse regimes (including $k = o(n/\log n)$), our main result generalizes that of Coja-Oghlan et al. (2020) by avoiding the assumption $k \le n^{1-\Omega(1)}$, whereas in sufficiently dense regimes (including $k = \omega(n/\log n)$), our main result shows that individual testing is asymptotically optimal for any non-zero target success probability, thus strengthening an existing result of Aldridge (2019, IEEE Trans. Inf. Theory, 65, 2058–2061) in terms of both the error probability and the assumed scaling of $k$.
- Tauman Kalai, Yael (Ed.): A folklore conjecture in quantum computing is that the acceptance probability of a quantum query algorithm can be approximated by a classical decision tree, with only a polynomial increase in the number of queries. Motivated by this conjecture, Aaronson and Ambainis (Theory of Computing, 2014) conjectured that this should hold more generally for any bounded function computed by a low-degree polynomial (a standard statement of the conjecture is reproduced after this list). In this work we prove two new results towards establishing this conjecture: first, that any such polynomial has a small fractional certificate complexity; and second, that many inputs have a small sensitive block. We show that these would imply the Aaronson–Ambainis conjecture, assuming a conjectured extension of Talagrand's concentration inequality. On the technical side, many classical techniques used in the analysis of Boolean functions seem to fail when applied to bounded functions. Here, we develop a new technique, based on a mix of combinatorics, analysis and geometry, which in part extends a recent technique of Knop et al. (STOC 2021) to bounded functions.
- Stefano Leonardi and Anupam Gupta (Eds.): A probabilistic algorithm A is pseudodeterministic if, on every input, there exists a canonical value that is output with high probability. If the algorithm outputs one of k canonical values with high probability, then it is called a k-pseudodeterministic algorithm. In the study of pseudodeterminism, the Acceptance Probability Estimation Problem (APEP), which is to additively approximate the acceptance probability of a Boolean circuit, is emerging as a central computational problem. This problem admits a 2-pseudodeterministic algorithm (a sketch of that folklore estimator appears after this list). Recently, it was shown that a pseudodeterministic algorithm for this problem would imply that any multi-valued function that admits a k-pseudodeterministic algorithm for a constant k (including approximation algorithms) also admits a pseudodeterministic algorithm (Dixon, Pavan, Vinodchandran; ITCS 2021). The contribution of the present work is two-fold. First, as our main conceptual contribution, we establish that the existence of a pseudodeterministic algorithm for APEP is fundamentally related to the gap between probabilistic promise classes and the corresponding standard complexity classes. In particular, we show the following equivalence: APEP has a pseudodeterministic approximation algorithm if and only if every promise problem in PromiseBPP has a solution in BPP. A conceptual interpretation of this equivalence is that the algorithmic gap between 2-pseudodeterminism and pseudodeterminism is equivalent to the gap between PromiseBPP and BPP. Based on this connection, we show that designing pseudodeterministic algorithms for APEP leads to the solution of some open problems in complexity theory, including new Boolean circuit lower bounds. This equivalence also explains how multi-pseudodeterminism is connected to problems in SearchBPP. In particular, we show that if APEP has a pseudodeterministic algorithm, then every problem that admits a k(n)-pseudodeterministic algorithm (for any polynomial k) is in SearchBPP and admits a pseudodeterministic algorithm. Motivated by this connection, we also explore probabilistic search problems and establish that APEP is complete for certain notions of search problems in the context of pseudodeterminism. Our second contribution is establishing query complexity lower bounds for multi-pseudodeterministic computations. We prove that for every k ≥ 1, there exists a problem whose (k+1)-pseudodeterministic query complexity, in the uniform query model, is O(1), but whose k-pseudodeterministic query complexity is Ω(n), even in the more general nonadaptive query model. A key contribution of this part of the work is the utilization of Sperner's lemma in establishing query complexity lower bounds.
- Query-to-communication lifting theorems, which connect the query complexity of a Boolean function to the communication complexity of an associated "lifted" function obtained by composing the function with many copies of another function known as a gadget (the standard definitions are reproduced after this list), have been instrumental in resolving many open questions in computational complexity. Several important complexity questions could be resolved if we could make substantial improvements in the input size required for lifting with the Index function, from its current near-linear size down to polylogarithmic in the number of inputs N of the original function or, ideally, constant. The near-linear size bound was shown by Lovett, Meka, Mertz, Pitassi and Zhang, who used a recent breakthrough improvement on the Sunflower Lemma to show that a certain graph associated with the Index function of near-linear size is a disperser. They also stated a conjecture about the Index function that is essential for further improvements in the size required for lifting with Index using current techniques. In this paper we prove the following: (1) The conjecture of Lovett et al. is false when the size of the Index gadget is $\log N - \omega(1)$. (2) The Inner-Product function, which satisfies the disperser property at size $O(\log N)$, also does not have this property when its size is $\log N - \omega(1)$. (3) Nonetheless, using Index gadgets of size at least 4, we prove a lifting theorem for a restricted class of communication protocols in which one of the players is limited to sending parities of its inputs. (4) Using the ideas from this lifting theorem, we derive a strong lifting theorem from decision tree size to parity decision tree size. We use this to derive a general lifting theorem in proof complexity from tree-resolution size to tree-like $\mathrm{Res}(\oplus)$ refutation size, which yields many new exponential lower bounds on such proofs.