Award ID: 1755800
Estimation of Shannon and Rényi entropies of unknown discrete distributions is a fundamental problem in statistical property testing. In this paper, we give the first quantum algorithms for estimating α-Rényi entropies (Shannon entropy being 1-Rényi entropy). In particular, we demonstrate a quadratic quantum speedup for Shannon entropy estimation and a generic quantum speedup for α-Rényi entropy estimation for all α ≥ 0, including tight bounds for the Shannon entropy, the Hartley entropy (α = 0), and the collision entropy (α = 2). We also provide quantum upper bounds for estimating the min-entropy (α = +∞) as well as the Kullback-Leibler divergence. We complement our results with quantum lower bounds on α-Rényi entropy estimation for all α ≥ 0. Our approach is inspired by the pioneering work of Bravyi, Harrow, and Hassidim (BHH) [1], but introduces several new technical ingredients: (1) we improve the error dependence of the BHH framework through a fine-tuned error analysis combined with Montanaro's approach to estimating the expected output of quantum subroutines [2], for the cases α = 0 and α = 1; (2) we develop a procedure, analogous to cooling schedules in simulated annealing, for general α ≥ 0; and (3) for integer α ≥ 2 and for α = +∞, we reduce the entropy estimation problem to the α-distinctness and ⌈log n⌉-distinctness problems, respectively.
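
For reference, the quantities named above follow the standard definitions (not spelled out in the abstract itself): for a discrete distribution p = (p_1, …, p_n) and α ≥ 0 with α ≠ 1, the α-Rényi entropy is

\[
H_\alpha(p) \;=\; \frac{1}{1-\alpha}\,\log \sum_{i=1}^{n} p_i^{\alpha},
\]

which recovers the Shannon entropy \(H(p) = -\sum_i p_i \log p_i\) in the limit α → 1. The special cases cited are the Hartley entropy \(H_0(p) = \log |\{i : p_i > 0\}|\), the collision entropy \(H_2(p) = -\log \sum_i p_i^2\), and the min-entropy \(H_\infty(p) = -\log \max_i p_i\). The Kullback-Leibler divergence between distributions p and q is \(D(p\,\|\,q) = \sum_i p_i \log (p_i / q_i)\).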