Title: A Tight Lower Bound for Entropy Flattening
We study entropy flattening: Given a circuit C_X implicitly describing an n-bit source X (namely, X is the output of C_X on a uniform random input), construct another circuit C_Y describing a source Y such that (1) source Y is nearly flat (uniform on its support), and (2) the Shannon entropy of Y is monotonically related to that of X. The standard solution is to have C_Y evaluate C_X a total of Theta(n^2) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: Any circuit C_Y for entropy flattening that repeatedly queries C_X as an oracle requires Omega(n^2) queries. Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [Johan Håstad et al., 1999; John Rompel, 1990; Thomas Holenstein, 2006; Iftach Haitner et al., 2006; Iftach Haitner et al., 2009; Iftach Haitner et al., 2013; Iftach Haitner et al., 2010; Salil P. Vadhan and Colin Jia Zheng, 2012]. It is also used in reductions between problems complete for statistical zero-knowledge [Tatsuaki Okamoto, 2000; Amit Sahai and Salil P. Vadhan, 1997; Oded Goldreich et al., 1999; Vadhan, 1999]. The Theta(n^2) query complexity is often the main efficiency bottleneck. Our lower bound can be viewed as a step towards proving that the current best construction of pseudorandom generators from arbitrary one-way functions by Vadhan and Zheng (STOC 2012) has optimal efficiency.
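The standard construction described above is simple enough to sketch in code. Below is a minimal illustrative sketch in Python, assuming the source circuit is given as a callable mapping a list of input bits to a list of output bits; the names flatten and example_source and the concrete parameters are hypothetical, not taken from the paper:

```python
import random

def flatten(C_X, input_len, k):
    """Black-box flattening sketch: build a source C_Y that evaluates the
    oracle C_X on k independent uniform inputs and concatenates the outputs.
    By the asymptotic equipartition property, taking k = Theta(n^2) (n = output
    length of C_X) makes the concatenated source nearly flat, with entropy
    monotonically related to that of C_X."""
    def C_Y():
        output = []
        for _ in range(k):
            r = [random.randint(0, 1) for _ in range(input_len)]
            output.extend(C_X(r))  # one oracle query to C_X
        return output
    return C_Y

# Hypothetical toy source: the first output bit is the AND of the first two
# input bits (hence biased), the remaining input bits pass through unchanged.
def example_source(r):
    return [r[0] & r[1]] + r[2:]

C_Y = flatten(example_source, input_len=8, k=64)  # k on the order of n^2
sample = C_Y()
```

Each sample drawn from C_Y costs k oracle calls to C_X; the lower bound of the paper says that no black-box flattener of this kind can get by with asymptotically fewer than n^2 such calls.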
Award ID(s):
1749750
NSF-PAR ID:
10081825
Author(s) / Creator(s):
Date Published:
Journal Name:
33rd Computational Complexity Conference (CCC 2018)
Volume:
102
ISSN:
1868-8969
Page Range / eLocation ID:
23:1-23:28
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. We introduce KL-hardness, a new notion of hardness for search problems that, on the one hand, is satisfied by all one-way functions and, on the other hand, implies both next-block pseudoentropy and inaccessible entropy, two forms of computational entropy used in recent constructions of pseudorandom generators and statistically hiding commitment schemes, respectively. Thus, KL-hardness unifies the latter two notions of computational entropy and sheds light on the apparent "duality" between them. Additionally, it yields a more modular and illuminating proof that one-way functions imply next-block inaccessible entropy, similar in structure to the proof that one-way functions imply next-block pseudoentropy (Vadhan and Zheng, STOC '12).
  2. Ta-Shma, Amnon (Ed.)
    Locally Decodable Codes (LDCs) are error-correcting codes C: Σ^n → Σ^m, encoding messages in Σ^n to codewords in Σ^m, with super-fast decoding algorithms. They are important mathematical objects in many areas of theoretical computer science, yet the best constructions so far have codeword length m that is super-polynomial in n, for codes with constant query complexity and constant alphabet size. In a very surprising result, Ben-Sasson, Goldreich, Harsha, Sudan, and Vadhan (SICOMP 2006) showed how to construct a relaxed version of LDCs (RLDCs) with constant query complexity and almost linear codeword length over the binary alphabet, and used them to obtain significantly improved constructions of Probabilistically Checkable Proofs. In this work, we study RLDCs in the standard Hamming-error setting, and introduce their variants in the insertion and deletion (Insdel) error setting. Standard LDCs for Insdel errors were first studied by Ostrovsky and Paskin-Cherniavsky (Information Theoretic Security, 2015), and are further motivated by recent advances in DNA random access bio-technologies. Our first result is an exponential lower bound on the length of Hamming RLDCs making 2 queries (even adaptively), over the binary alphabet. This answers a question explicitly raised by Gur and Lachish (SICOMP 2021) and is the first exponential lower bound for RLDCs. Combined with the results of Ben-Sasson et al., our result exhibits a "phase-transition"-type behavior on the codeword length for some constant query complexity. We achieve these lower bounds via a transformation of RLDCs to standard Hamming LDCs, using a careful analysis of restrictions of message bits that fix codeword bits. We further define two variants of RLDCs in the Insdel-error setting, a weak and a strong version. On the one hand, we construct weak Insdel RLDCs with almost linear codeword length and constant query complexity, matching the parameters of the Hamming variants. On the other hand, we prove exponential lower bounds for strong Insdel RLDCs. These results demonstrate that, while these variants are equivalent in the Hamming setting, they are significantly different in the Insdel setting. Our results also prove a strict separation between Hamming RLDCs and Insdel RLDCs.
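    To make the "query complexity" terminology concrete, the classical Hadamard code is the textbook example of a 2-query LDC (exponential codeword length over the binary alphabet); it is not a construction from this work, and the function names below are illustrative:

```python
import random

def hadamard_encode(x):
    """Encode an n-bit message x as the 2^n inner products <x, a> mod 2,
    one codeword entry for every a in {0,1}^n."""
    n = len(x)
    return [sum(x[j] & ((a >> j) & 1) for j in range(n)) % 2
            for a in range(2 ** n)]

def hadamard_decode_bit(codeword, n, i):
    """Recover message bit i with exactly 2 queries: pick a uniformly random a
    and output codeword[a] XOR codeword[a XOR e_i].  If a delta fraction of the
    codeword is corrupted, this is correct with probability at least 1 - 2*delta."""
    a = random.randrange(2 ** n)
    return codeword[a] ^ codeword[a ^ (1 << i)]

msg = [1, 0, 1, 1]
cw = hadamard_encode(msg)
cw[3] ^= 1  # corrupt one codeword position
decoded = hadamard_decode_bit(cw, len(msg), 2)  # equals msg[2] unless a query hits the corruption
```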
  3. We present new constructions of pseudorandom generators (PRGs) for two of the most widely studied non-uniform circuit classes in complexity theory. Our main result is a construction of the first non-trivial PRG for linear threshold (LTF) circuits of arbitrary constant depth and super-linear size. This PRG fools circuits with depth d ∈ N and n^(1+δ) wires, where δ = 2^(-O(d)), using seed length O(n^(1-δ)) and with error 2^(-n^δ). This tightly matches the best known lower bounds for this circuit class. As a consequence of our result, all the known hardness for LTF circuits has now effectively been translated into pseudorandomness. This brings the extensive effort in the last decade to construct PRGs and deterministic circuit-analysis algorithms for this class to the point where any subsequent improvement would yield breakthrough lower bounds. Our second contribution is a PRG for De Morgan formulas of size s whose seed length is s^(1/3+o(1)) · polylog(1/ϵ) for error ϵ. In particular, our PRG can fool formulas of sub-cubic size s = n^(3-Ω(1)) with an exponentially small error ϵ = exp(-n^(Ω(1))). This significantly improves the inverse-polynomial error of the previous state-of-the-art for such formulas by Impagliazzo, Meka, and Zuckerman (FOCS 2012, JACM 2019), and again tightly matches the best currently-known lower bounds for this class. In both settings, a key ingredient in our constructions is a pseudorandom restriction procedure that has tiny failure probability, but simplifies the function to a non-natural “hybrid computational model” that combines several computational models.
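    For reference, a generator G with seed length s "ϵ-fools" a class of tests on n bits if no test T in the class distinguishes G's output from uniform with advantage more than ϵ, i.e. |Pr[T(U_n) = 1] - Pr[T(G(U_s)) = 1]| <= ϵ. The brute-force check below illustrates only this definition; the toy generator and test are hypothetical placeholders, unrelated to the LTF and formula constructions of the abstract:

```python
from itertools import product

def fools(G, seed_len, out_len, test, eps):
    """Brute-force check that G: {0,1}^seed_len -> {0,1}^out_len eps-fools `test`:
    the acceptance probability on G's outputs is within eps of the acceptance
    probability on truly uniform inputs."""
    p_uniform = sum(test(x) for x in product((0, 1), repeat=out_len)) / 2 ** out_len
    p_prg = sum(test(G(s)) for s in product((0, 1), repeat=seed_len)) / 2 ** seed_len
    return abs(p_uniform - p_prg) <= eps

# Toy generator (repeat the seed twice) and toy test (majority); this generator
# fails to 0.1-fool majority, so the check prints False.
toy_G = lambda s: s + s
majority = lambda x: int(sum(x) > len(x) / 2)
print(fools(toy_G, seed_len=3, out_len=6, test=majority, eps=0.1))
```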
  4. Ultra-high-energy (UHE) photons are an important tool for studying the high-energy Universe. A plausible source of photons with exa-eV (EeV) energy is provided by UHE cosmic rays (UHECRs) undergoing the Greisen–Zatsepin–Kuzmin process (Greisen 1966; Zatsepin & Kuzmin 1966) or pair production process (Blumenthal 1970) on a cosmic background radiation. In this context, the EeV photons can be a probe of both UHECR mass composition and the distribution of their sources (Gelmini, Kalashev & Semikoz 2008; Hooper, Taylor & Sarkar 2011). At the same time, the possible flux of photons produced by UHE protons in the vicinity of their sources by pion photoproduction or inelastic nuclear collisions would be noticeable only for relatively near sources, as the attenuation length of UHE photons is smaller than that of UHE protons; see, for example, Bhattacharjee & Sigl (2000) for a review. There also exists a class of so-called top-down models of UHECR generation that efficiently produce the UHE photons, for instance by the decay of heavy dark-matter particles (Berezinsky, Kachelriess & Vilenkin 1997; Kuzmin & Rubakov 1998) or by the radiation from cosmic strings (Berezinsky, Blasi & Vilenkin 1998). The search for the UHE photons was shown to be the most sensitive method of indirect detection of heavy dark matter (Kalashev & Kuznetsov 2016, 2017; Kuznetsov 2017; Kachelriess, Kalashev & Kuznetsov 2018; Alcantara, Anchordoqui & Soriano 2019). Another fundamental physics scenario that could be tested with UHE photons (Fairbairn, Rashba & Troitsky 2011) is the photon mixing with axion-like particles (Raffelt & Stodolsky 1988), which could be responsible for the correlation of UHECR events with BL Lac type objects observed by the High Resolution Fly’s Eye (HiRes) experiment (Gorbunov et al. 2004; Abbasi et al. 2006). In most of these scenarios, a clustering of photon arrival directions, rather than a diffuse distribution, is expected, so point-source searches can be a suitable test for photon–axion-like particle mixing models. Finally, UHE photons could also be used as a probe for the models of Lorentz-invariance violation (Coleman & Glashow 1999; Galaverni & Sigl 2008; Maccione, Liberati & Sigl 2010; Rubtsov, Satunin & Sibiryakov 2012, 2014). The Telescope Array (TA; Tokuno et al. 2012; Abu-Zayyad et al. 2013c) is the largest cosmic ray experiment in the Northern Hemisphere. It is located at 39.3° N, 112.9° W in Utah, USA. The observatory includes a surface detector array (SD) and 38 fluorescence telescopes grouped into three stations. The SD consists of 507 stations that contain plastic scintillators, each with an area of 3 m^2 (SD stations). The stations are placed on a square grid with 1.2 km spacing and cover an area of ∼700 km^2. The TA SD is capable of detecting extensive air showers (EASs) in the atmosphere caused by cosmic particles of EeV and higher energies. The TA SD has been operating since 2008 May. A hadron-induced EAS significantly differs from an EAS induced by a photon because the depth of the shower maximum X_max for a photon shower is larger, and a photon shower contains fewer muons and has a more curved front (see Risse & Homola 2007 for a review). The TA SD stations are sensitive to both the muon and electromagnetic components of the shower and therefore can be triggered by both hadron-induced and photon-induced EAS events. In the present study, we use 9 yr of TA SD data for a blind search for point sources of UHE photons.
We utilize the statistics of the SD data, which benefit from a high duty cycle. The full Monte Carlo (MC) simulation of proton-induced and photon-induced EAS events allows us to perform the photon search up to the highest accessible energies, E ≳ 10^20 eV. As the main tool for the present photon search, we use a multivariate analysis based on a number of SD parameters that make it possible to distinguish between photon and hadron primaries. While searches for diffuse UHE photons were performed by several EAS experiments, including Haverah Park (Ave et al. 2000), AGASA (Shinozaki et al. 2002; Risse et al. 2005), Yakutsk (Rubtsov et al. 2006; Glushkov et al. 2007, 2010), Pierre Auger (Abraham et al. 2007, 2008a; Bleve 2016; Aab et al. 2017c) and TA (Abu-Zayyad et al. 2013b; Abbasi et al. 2019a), the search for point sources of UHE photons has been done only by the Pierre Auger Observatory (Aab et al. 2014, 2017a). The latter searches were based on hybrid data and were limited to the 10^17.3 < E < 10^18.5 eV energy range. In the present paper, we use the TA SD data alone. We perform the searches in five energy ranges: E > 10^18, E > 10^18.5, E > 10^19, E > 10^19.5 and E > 10^20 eV. We find no significant evidence of photon point sources in any of the energy ranges, and we set upper limits on the point-source photon flux from each direction in the TA field of view (FOV). The search for unspecified neutral particles was also previously performed by the TA (Abbasi et al. 2015). The limit on the point-source flux of neutral particles obtained in that work is close to the present photon point-source flux limits.
  5.
    The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We find tight or nearly tight bounds on the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following. k-Distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function is Ω(n^(3/4-1/(2k))). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness is O(n^(3/4-1/(2^(k+2)-4))). Image size testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n] → [n] is Ω~(n^(1/2)). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. k-Junta testing: A tight Ω~(k^(1/2)) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). Statistical distance from uniform: A tight Ω~(n^(1/2)) lower bound for approximating the statistical distance of a distribution from uniform, answering the main question left open by Bravyi et al. (STACS 2010 and IEEE Trans. Inf. Theory 2011). Shannon entropy: A tight Ω~(n^(1/2)) lower bound for approximating Shannon entropy up to a certain additive constant, answering a question of Li and Wu (2017). Surjectivity: The approximate degree of the surjectivity function is Ω~(n^(3/4)). The best prior lower bound was Ω(n^(2/3)). Our result matches an upper bound of O~(n^(3/4)) due to Sherstov (STOC 2018), which we reprove using different techniques. The quantum query complexity of this function is known to be Θ(n) (Beame and Machmouchi, Quantum Inf. Comput. 2012 and Sherstov, FOCS 2015). Our upper bound for surjectivity introduces new techniques for approximating Boolean functions by low-degree polynomials. Our lower bounds are proved by significantly refining techniques recently introduced by Bun and Thaler (FOCS 2017).
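    The definition in the first sentence can be checked directly for small functions: the approximate degree is the least d for which a simple linear-programming feasibility problem has a solution. The sketch below (assuming SciPy is available) is a generic brute-force illustration of that definition, not a technique from the paper:

```python
from itertools import combinations, product

import numpy as np
from scipy.optimize import linprog

def approx_degree(f, n, eps=1/3):
    """Least d such that some real multilinear polynomial of degree <= d is
    within eps of the Boolean function f on every point of {0,1}^n."""
    inputs = list(product((0, 1), repeat=n))
    for d in range(n + 1):
        # All multilinear monomials of degree at most d.
        monomials = [S for k in range(d + 1) for S in combinations(range(n), k)]
        A_ub, b_ub = [], []
        for x in inputs:
            row = [float(np.prod([x[i] for i in S])) for S in monomials]
            fx = f(x)
            A_ub.append(row)                # p(x) <= f(x) + eps
            b_ub.append(fx + eps)
            A_ub.append([-c for c in row])  # -p(x) <= eps - f(x)
            b_ub.append(eps - fx)
        res = linprog(c=np.zeros(len(monomials)), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(None, None)] * len(monomials), method="highs")
        if res.success:
            return d
    return n

OR4 = lambda x: int(any(x))
print(approx_degree(OR4, 4))  # prints 2; the approximate degree of OR_n grows like sqrt(n)
```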