Title: Spoofing Cross-Entropy Measure in Boson Sampling
The cross-entropy (XE) measure is a widely used benchmark for demonstrating quantum computational advantage in sampling problems, such as random circuit sampling using superconducting qubits and boson sampling (BS). We present a heuristic classical algorithm that attains a better XE than current BS experiments in a verifiable regime and is likely to attain a better XE score than near-future BS experiments within a reasonable running time. The key idea behind the algorithm is that there exist distributions that correlate with the ideal BS probability distribution and that can be efficiently computed. The correlation and computability of such a distribution enable us to postselect heavy outcomes of the ideal probability distribution without computing the ideal probabilities, which essentially leads to a large XE. Our method scores a better XE than recent Gaussian BS experiments when implemented at intermediate, verifiable system sizes. Much like current state-of-the-art experiments, we cannot verify that our spoofer works for quantum-advantage-size systems. However, we demonstrate that our approach works for much larger system sizes in fermion sampling, where output probabilities can be computed efficiently. Finally, we provide analytic evidence that the classical algorithm is likely to spoof noisy BS efficiently.
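To make the postselection idea concrete, here is a minimal numerical sketch with synthetic distributions standing in for the BS probabilities. The proxy q, the outcome-space size, and the mean-log-probability definition of XE are all assumptions of this illustration, not the paper's construction:

```python
# Toy illustration (not the paper's algorithm): boosting a cross-entropy
# score by postselecting "heavy" outcomes under an efficiently computable
# proxy distribution q that correlates with the ideal distribution p.
import numpy as np

rng = np.random.default_rng(0)
M = 2**12                                   # toy outcome-space size

# Synthetic "ideal" distribution p (stand-in for BS output probabilities)
p = rng.exponential(size=M)
p /= p.sum()

# Proxy q: correlated with p but cheap to evaluate (here, p with noise)
q = p * rng.lognormal(sigma=1.0, size=M)
q /= q.sum()

def xe(samples, p):
    """XE score used here: average log ideal probability of the samples."""
    return np.mean(np.log(p[samples]))

# Baseline: samples drawn directly from q
samples = rng.choice(M, size=20_000, p=q)
print("XE without postselection:", xe(samples, p))

# Spoofer: keep only the heaviest outcomes as ranked by q (never by p!)
cutoff = np.quantile(q[samples], 0.9)
heavy = samples[q[samples] >= cutoff]
print("XE with postselection:  ", xe(heavy, p))

# Ideal-sampler reference for comparison
ideal = rng.choice(M, size=20_000, p=p)
print("XE of ideal sampler:    ", xe(ideal, p))
```

Because the heavy outcomes are selected by q alone, the XE boost requires no evaluation of p at sampling time; p is used here only to score the result, mirroring the verifiable-regime comparison described above.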
Award ID(s): 1936118, 2137642
NSF-PAR ID: 10482885
Publisher / Repository: Physical Review Letters
Journal Name: Physical Review Letters
Volume: 131
Issue: 1
ISSN: 0031-9007
Sponsoring Org: National Science Foundation
More Like this
  1. Achieving quantum computational advantage requires solving a classically intractable problem on a quantum device. Natural proposals rely upon the intrinsic hardness of classically simulating quantum mechanics; however, verifying the output is itself classically intractable. On the other hand, certain quantum algorithms (e.g. prime factorization via Shor's algorithm) are efficiently verifiable, but require more resources than are available on near-term devices. One way to bridge the gap between verifiability and implementation is to use "interactions" between a prover and a verifier. By leveraging cryptographic functions, such protocols enable the classical verifier to enforce consistency in a quantum prover's responses across multiple rounds of interaction. In this work, we demonstrate the first implementation of an interactive quantum advantage protocol, using an ion-trap quantum computer. We execute two complementary protocols -- one based upon the learning with errors problem and another where the cryptographic construction implements a computational Bell test. To perform multiple rounds of interaction, we implement mid-circuit measurements on a subset of trapped ion qubits, with subsequent coherent evolution. For both protocols, the performance exceeds the asymptotic bound for classical behavior; maintaining this fidelity at scale would conclusively demonstrate verifiable quantum advantage.
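For orientation only, the sketch below shows the bare round structure such interactive protocols build on, with a classical hash commitment standing in for the cryptographic machinery; the actual LWE-based and computational-Bell-test protocols require a quantum prover and trapdoor claw-free functions:

```python
# Purely classical toy skeleton of a commit-challenge-respond interaction:
# the prover commits, the verifier challenges, and the prover's later
# answers must stay consistent with the commitment. Illustrative only.
import hashlib
import secrets

def commit(value: bytes, salt: bytes) -> str:
    # Hash commitment: binding and hiding under standard assumptions
    return hashlib.sha256(salt + value).hexdigest()

# Round 1: prover commits to a secret without revealing it
secret, salt = secrets.token_bytes(16), secrets.token_bytes(16)
c = commit(secret, salt)

# Round 2: verifier issues a random challenge bit
challenge = secrets.randbelow(2)

# Round 3: prover opens; verifier checks consistency across rounds
opened_secret, opened_salt = secret, salt
ok = commit(opened_secret, opened_salt) == c
print(f"challenge bit = {challenge}, response consistent = {ok}")
```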
  2. Markov chain Monte Carlo algorithms have important applications in counting problems and in machine learning problems, settings that involve estimating quantities that are difficult to compute exactly. How much can quantum computers speed up classical Markov chain algorithms? In this work we consider the problem of speeding up simulated annealing algorithms, where the stationary distributions of the Markov chains are Gibbs distributions at temperatures specified according to an annealing schedule. We construct a quantum algorithm that both adaptively constructs an annealing schedule and quantum-samples at each temperature. Our adaptive annealing schedule roughly matches the length of the best classical adaptive annealing schedules and improves on nonadaptive temperature schedules by roughly a quadratic factor. Our dependence on the Markov chain gap matches other quantum algorithms and is quadratically better than what classical Markov chains achieve. Our algorithm is the first to combine both of these quadratic improvements. Like other quantum walk algorithms, it also improves on classical algorithms by producing "qsamples" instead of classical samples, i.e., by preparing quantum states whose amplitudes are the square roots of the target probability distribution. In constructing the annealing schedule we make use of amplitude estimation, and we introduce a method for making amplitude estimation nondestructive at almost no additional cost, a result that may be of independent interest. Finally we demonstrate how this quantum simulated annealing algorithm can be applied to the problems of estimating partition functions and Bayesian inference.
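As a classical point of reference for the structure being sped up, the sketch below runs Metropolis sampling of Gibbs distributions along an adaptively built temperature schedule. The effective-sample-size step rule is a common classical heuristic assumed for illustration, not the paper's amplitude-estimation-based construction:

```python
# Classical sketch: anneal through Gibbs distributions ~ exp(-beta*E)
# on a toy 1D Ising chain, choosing each temperature step adaptively.
import numpy as np

rng = np.random.default_rng(1)
n = 20                                     # chain length

def energy(x):
    # 1D Ising energy: favors aligned neighboring spins
    return -np.sum(x[:-1] * x[1:])

def metropolis(x, beta, steps=2000):
    # Metropolis-Hastings sampling of the Gibbs distribution at inverse
    # temperature beta
    for _ in range(steps):
        i = rng.integers(n)
        nb = (x[i - 1] if i > 0 else 0) + (x[i + 1] if i < n - 1 else 0)
        dE = 2 * x[i] * nb                 # energy change of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            x[i] = -x[i]
    return x

def next_beta(beta, energies, target_ess=0.5):
    # Largest temperature step whose importance weights keep the effective
    # sample size above target, so successive Gibbs distributions overlap
    # (a classical proxy for the paper's adaptive-schedule criterion)
    db = 1.0
    while db > 1e-4:
        w = np.exp(-db * (energies - energies.min()))
        ess = w.sum() ** 2 / (w ** 2).sum() / len(w)
        if ess >= target_ess:
            break
        db /= 2
    return beta + db

beta = 0.0
x = rng.choice([-1, 1], size=n)
schedule = [beta]
while beta < 3.0:
    x = metropolis(x, beta)
    energies = np.array([energy(metropolis(x.copy(), beta, 100))
                         for _ in range(64)])
    beta = next_beta(beta, energies)
    schedule.append(beta)
print(f"adaptive schedule: {len(schedule)} temperatures, final beta {beta:.2f}")
```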
  3. Hazay, Carmit; Stam, Martijn (Eds.)
    We study the computational problem of finding a shortest non-zero vector in a rotation of ℤ^n, which we call ℤSVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for ℤSVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve ℤSVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in 2^{n+o(n)} time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for ℤSVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, if ℤSVP actually is hard, then what consequences would follow? Our results are as follows. We show that ℤSVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce ℤSVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of ℤSVP from SVP. As a consequence of this reduction, we obtain a 2^{n/2+o(n)}-time algorithm for ℤSVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices: semi-stable lattices with not-too-large λ_1.) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) ℤSVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of ℤ^n from either a lattice with all non-zero vectors longer than √(n/log n) or a lattice with smoothing parameter significantly smaller than the smoothing parameter of ℤ^n. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that "ℤ^n has the largest smoothing parameter." We show a distribution of bases B for rotations of ℤ^n such that, if ℤSVP is hard for any input basis, then ℤSVP is hard on input B. This gives a satisfying theoretical resolution to the problem of sampling hard bases for ℤ^n, which was studied by Blanks and Miller [9]. This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices [15], and they also used this to analyze the security of a public-key encryption scheme. Similar ideas also appeared in [5, 11, 20] in different contexts.) We perform experiments to determine how practical basis reduction performs on bases of ℤ^n that are generated in different ways and how heuristic sieving algorithms perform on ℤ^n. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the "provably hard" distribution of bases described above. Our sieving experiments confirm that heuristic sieving algorithms perform as expected on ℤ^n.
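For readers who want to poke at the object in question, here is a small numerical sketch (an assumed setup, not the paper's experiments) of how a "scrambled" basis of a rotation of ℤ^n can be generated, and of the two properties that make the problem interesting: the Gram matrix is integral, yet λ_1 is always exactly 1:

```python
# A rotation of Z^n is Q.Z^n for orthogonal Q; an arbitrary basis of it is
# B = Q @ U with U unimodular. Then B^T B = U^T U is integral, and the
# shortest nonzero lattice vector has length exactly 1.
import numpy as np

rng = np.random.default_rng(2)
n = 8

# Random rotation Q (orthogonal matrix) via QR of a Gaussian matrix
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))

# Random unimodular U: a product of elementary integer row operations,
# so det(U) = +-1 and U maps Z^n onto Z^n
U = np.eye(n, dtype=np.int64)
for _ in range(40):
    i, j = rng.choice(n, size=2, replace=False)
    U[i] += rng.integers(-1, 2) * U[j]

B = Q @ U.astype(float)        # a scrambled basis of the rotated lattice
gram = B.T @ B                 # equals U^T U, hence integral

print("Gram matrix integral:", np.allclose(gram, np.round(gram)))
# Undoing U recovers the orthonormal frame Q, so lambda_1 equals 1
R = B @ np.linalg.inv(U.astype(float))
print("unit-length lattice vectors:", np.allclose(np.linalg.norm(R, axis=0), 1.0))
```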
  4. Given a data matrix 𝐷, a submatrix 𝑆 of 𝐷 is an order-preserving submatrix (OPSM) if there is a permutation of the columns of 𝑆 under which the entry values of each row in 𝑆 are strictly increasing. OPSM mining is widely used in real-life applications such as identifying coexpressed genes and finding customers with similar preferences. However, noise is ubiquitous in real data matrices due to variable experimental conditions and measurement errors, which makes conventional OPSM mining algorithms inapplicable. No previous work on OPSM has considered uncertain value intervals using the well-established possible world semantics. We establish two different definitions of significant OPSMs based on the possible world semantics: (1) expected support-based and (2) probabilistic frequentness-based. An optimized dynamic programming approach is proposed to compute the probability that a row supports a particular column permutation, with a closed-form formula derived to efficiently handle the special case of uniform value distributions and an accurate cubic-spline approximation that works well with arbitrary uncertain value distributions. To check probabilistic frequentness efficiently, several pruning rules are designed to discard insignificant OPSMs early; two approximation techniques based on the Poisson and Gaussian distributions, respectively, are proposed for further speedup. These techniques are integrated into our two OPSM mining algorithms, based on prefix-projection and Apriori, respectively. We further parallelize our prefix-projection-based mining algorithm using PrefixFPM, a recently proposed framework for parallel frequent pattern mining, and we achieve a good speedup with the number of CPU cores. Extensive experiments on real microarray data demonstrate that the OPSMs found by our algorithms have a much higher quality than those found by existing approaches.
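The central probabilistic quantity, the chance that an uncertain row supports a column permutation, is easy to approximate by Monte Carlo, which the sketch below does for uniform intervals. The paper computes this exactly by dynamic programming, so this is an illustration only, and the interval data are made up:

```python
# Monte Carlo estimate of P[row supports perm]: values drawn independently
# from per-cell uniform intervals are strictly increasing when the columns
# are visited in the order given by `perm`.
import numpy as np

rng = np.random.default_rng(3)

def support_probability(intervals, perm, trials=100_000):
    lo = np.array([intervals[c][0] for c in perm])
    hi = np.array([intervals[c][1] for c in perm])
    draws = rng.uniform(lo, hi, size=(trials, len(perm)))
    # A trial supports perm iff every consecutive difference is positive
    return np.mean(np.all(np.diff(draws, axis=1) > 0, axis=1))

# One uncertain row with 4 columns, each value a uniform interval
row = [(0.0, 2.0), (1.0, 3.0), (2.0, 4.0), (3.0, 5.0)]
print(support_probability(row, perm=(0, 1, 2, 3)))   # likely order: high prob
print(support_probability(row, perm=(3, 2, 1, 0)))   # reversed: low prob
```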
  5. Existing experimental demonstrations of quantum computational advantage have had the limitation that verifying the correctness of the quantum device requires exponentially costly classical computations. Here we propose and analyse an interactive protocol for demonstrating quantum computational advantage, which is efficiently classically verifiable. Our protocol relies on a class of cryptographic tools called trapdoor claw-free functions. Although this type of function has been applied to quantum advantage protocols before, our protocol employs a surprising connection to Bell’s inequality to avoid the need for a demanding cryptographic property called the adaptive hardcore bit, while maintaining essentially no increase in the quantum circuit complexity and no extra assumptions. Leveraging the relaxed cryptographic requirements of the protocol, we present two trapdoor claw-free function constructions, based on Rabin’s function and the Diffie–Hellman problem, which have not been used in this context before. We also present two independent innovations that improve the efficiency of our implementation and can be applied to other quantum cryptographic protocols. First, we give a scheme to discard so-called garbage bits, removing the need for reversibility in the quantum circuits. Second, we show a natural way of performing postselection that reduces the fidelity needed to demonstrate quantum advantage. Combining these results, we describe a blueprint for implementing our protocol on Rydberg atom-based quantum devices, using hardware-native operations that have already been demonstrated experimentally.
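As a flavor of the first construction mentioned above, the following toy sketch shows the claw structure of Rabin's function f(x) = x^2 mod N: with the trapdoor (the factorization of N) all four preimages of an image are easy to recover, while finding even two of them without it is as hard as factoring. The tiny primes and helper names are illustrative assumptions, not the paper's parameters:

```python
# Toy demonstration of trapdoor decoding for Rabin's function f(x) = x^2 mod N,
# with N = p*q and p = q = 3 (mod 4). Real use needs large primes.
p, q = 1019, 1187
N = p * q

def f(x):
    return pow(x, 2, N)

def sqrt_mod_prime(a, r):
    # For prime r = 3 (mod 4), a square root of a QR a is a^((r+1)/4) mod r
    return pow(a, (r + 1) // 4, r)

def all_square_roots(y):
    # Trapdoor decoding: CRT-combine the +- square roots mod p and mod q
    rp, rq = sqrt_mod_prime(y % p, p), sqrt_mod_prime(y % q, q)
    inv_p_mod_q = pow(p, -1, q)
    roots = set()
    for sp in (rp, p - rp):
        for sq in (rq, q - rq):
            t = ((sq - sp) * inv_p_mod_q) % q
            roots.add((sp + p * t) % N)
    return sorted(roots)

x = 123456 % N
y = f(x)
roots = all_square_roots(y)
print("claw (all preimages of y):", roots)
assert all(f(r) == y for r in roots) and x in roots
```

Any two distinct roots of the same image form a "claw"; recovering such a pair without the trapdoor would reveal a non-trivial factor of N, which is the hardness the protocol leans on.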