
Title: A Tight Lower Bound for Entropy Flattening
We study entropy flattening: Given a circuit C_X implicitly describing an n-bit source X (namely, X is the output of C_X on a uniform random input), construct another circuit C_Y describing a source Y such that (1) source Y is nearly flat (uniform on its support), and (2) the Shannon entropy of Y is monotonically related to that of X. The standard solution is to have C_Y evaluate C_X altogether Theta(n^2) times on independent inputs and concatenate the results (correctness follows from the asymptotic equipartition property). In this paper, we show that this is optimal among black-box constructions: Any circuit C_Y for entropy flattening that repeatedly queries C_X as an oracle requires Omega(n^2) queries. Entropy flattening is a component used in the constructions of pseudorandom generators and other cryptographic primitives from one-way functions [Johan Håstad et al., 1999; John Rompel, 1990; Thomas Holenstein, 2006; Iftach Haitner et al., 2006; Iftach Haitner et al., 2009; Iftach Haitner et al., 2013; Iftach Haitner et al., 2010; Salil P. Vadhan and Colin Jia Zheng, 2012]. It is also used in reductions between problems complete for statistical zero-knowledge [Tatsuaki Okamoto, 2000; Amit Sahai and Salil P. Vadhan, 1997; Oded Goldreich et al., 1999; Salil P. Vadhan, 1999]. The Theta(n^2) query complexity is often the main efficiency bottleneck. Our lower bound can be viewed as a step towards proving that the current best construction of a pseudorandom generator from arbitrary one-way functions by Vadhan and Zheng (STOC 2012) has optimal efficiency.
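As a concrete illustration of the repetition construction in the abstract, the following minimal sketch (not taken from the paper) models the oracle C_X as a Python callable mapping a uniformly random seed to a sample; C_Y simply queries it on t = c*n^2 independent seeds and concatenates the outputs, which is the Theta(n^2)-query strategy whose optimality the paper establishes. The constant c, the byte-level interface, and all names are illustrative assumptions.

```python
import secrets

def make_flattened_source(C_X, n, seed_len, c=1):
    """Black-box entropy flattening by repetition (an illustrative sketch, not the
    paper's circuit-level construction).

    C_X    : callable mapping a uniformly random seed (bytes) to an n-bit sample (bytes).
    Returns: C_Y, a sampler that queries C_X on t = c * n**2 independent uniform seeds
             and concatenates the results. If X has Shannon entropy H, then Y has
             entropy t*H and, by the asymptotic equipartition property, is nearly flat.
    """
    t = c * n * n  # Theta(n^2) oracle queries -- the quantity the paper proves necessary

    def C_Y():
        # Each query uses a fresh, independent uniform seed.
        samples = [C_X(secrets.token_bytes((seed_len + 7) // 8)) for _ in range(t)]
        return b"".join(samples)

    return C_Y

# Toy usage: an artificial 8-bit source with only 3 bits of entropy.
if __name__ == "__main__":
    def toy_source(seed: bytes) -> bytes:
        return bytes([seed[0] & 0b111])  # keeps 3 of the 8 seed bits

    C_Y = make_flattened_source(toy_source, n=8, seed_len=8)
    y = C_Y()  # 64 concatenated samples, total entropy roughly 64 * 3 bits
    print(len(y), "output bytes from", 8 * 8, "oracle queries")
```

Drawing every seed independently is what makes the entropies of the copies add up and lets the asymptotic equipartition property flatten the concatenated output.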
Authors:
Yi-Hsiu Chen; Mika Göös; Salil P. Vadhan; Jiapeng Zhang
Award ID(s):
1749750
Publication Date:
NSF-PAR ID:
10081825
Journal Name:
33rd Computational Complexity Conference (CCC 2018)
Volume:
102
Page Range or eLocation-ID:
23:1-23:28
ISSN:
1868-8969
Sponsoring Org:
National Science Foundation
More Like this
  1. Ultra-high-energy (UHE) photons are an important tool for studying the high-energy Universe. A plausible source of photons with exa-eV (EeV) energy is provided by UHE cosmic rays (UHECRs) undergoing the Greisen–Zatsepin–Kuzmin process (Greisen 1966; Zatsepin & Kuzmin 1966) or the pair-production process (Blumenthal 1970) on cosmic background radiation. In this context, EeV photons can be a probe of both UHECR mass composition and the distribution of their sources (Gelmini, Kalashev & Semikoz 2008; Hooper, Taylor & Sarkar 2011). At the same time, the possible flux of photons produced by UHE protons in the vicinity of their sources by pion photoproduction or inelastic nuclear collisions would be noticeable only for relatively nearby sources, as the attenuation length of UHE photons is smaller than that of UHE protons; see, for example, Bhattacharjee & Sigl (2000) for a review. There also exists a class of so-called top-down models of UHECR generation that efficiently produce UHE photons, for instance by the decay of heavy dark-matter particles (Berezinsky, Kachelriess & Vilenkin 1997; Kuzmin & Rubakov 1998) or by the radiation from cosmic strings (Berezinsky, Blasi & Vilenkin 1998). The search for UHE photons was shown to be the most sensitive method of indirect detection of heavy dark matter (Kalashev & Kuznetsov 2016, 2017; Kuznetsov 2017; Kachelriess, Kalashev & Kuznetsov 2018; Alcantara, Anchordoqui & Soriano 2019). Another fundamental-physics scenario that could be tested with UHE photons (Fairbairn, Rashba & Troitsky 2011) is photon mixing with axion-like particles (Raffelt & Stodolsky 1988), which could be responsible for the correlation of UHECR events with BL Lac type objects observed by the High Resolution Fly's Eye (HiRes) experiment (Gorbunov et al. 2004; Abbasi et al. 2006). In most of these scenarios, a clustering of photon arrival directions, rather than a diffuse distribution, is expected, so point-source searches are a suitable test for models of photon mixing with axion-like particles. Finally, UHE photons could also be used as a probe of models of Lorentz-invariance violation (Coleman & Glashow 1999; Galaverni & Sigl 2008; Maccione, Liberati & Sigl 2010; Rubtsov, Satunin & Sibiryakov 2012, 2014). The Telescope Array (TA; Tokuno et al. 2012; Abu-Zayyad et al. 2013c) is the largest cosmic ray experiment in the Northern Hemisphere. It is located at 39.3° N, 112.9° W in Utah, USA. The observatory includes a surface detector array (SD) and 38 fluorescence telescopes grouped into three stations. The SD consists of 507 stations that contain plastic scintillators, each with an area of 3 m² (SD stations). The stations are placed on a square grid with 1.2 km spacing and cover an area of ∼700 km². The TA SD is capable of detecting extensive air showers (EASs) in the atmosphere caused by cosmic particles of EeV and higher energies. The TA SD has been operating since 2008 May. A hadron-induced EAS differs significantly from an EAS induced by a photon: the depth of the shower maximum X_max for a photon shower is larger, and a photon shower contains fewer muons and has a more curved front (see Risse & Homola 2007 for a review). The TA SD stations are sensitive to both the muon and electromagnetic components of the shower and therefore can be triggered by both hadron-induced and photon-induced EAS events. In the present study, we use 9 yr of TA SD data for a blind search for point sources of UHE photons.
We utilize the statistics of the SD data, which benefit from a high duty cycle. The full Monte Carlo (MC) simulation of proton-induced and photon-induced EAS events allows us to perform the photon search up to the highest accessible energies, E ≳ 10^20 eV. As the main tool for the present photon search, we use a multivariate analysis based on a number of SD parameters that make it possible to distinguish between photon and hadron primaries. While searches for diffuse UHE photons were performed by several EAS experiments, including Haverah Park (Ave et al. 2000), AGASA (Shinozaki et al. 2002; Risse et al. 2005), Yakutsk (Rubtsov et al. 2006; Glushkov et al. 2007, 2010), Pierre Auger (Abraham et al. 2007, 2008a; Bleve 2016; Aab et al. 2017c) and TA (Abu-Zayyad et al. 2013b; Abbasi et al. 2019a), the search for point sources of UHE photons has been done only by the Pierre Auger Observatory (Aab et al. 2014, 2017a). The latter searches were based on hybrid data and were limited to the 10^17.3 < E < 10^18.5 eV energy range. In the present paper, we use the TA SD data alone. We perform the searches in five energy ranges: E > 10^18, E > 10^18.5, E > 10^19, E > 10^19.5 and E > 10^20 eV. We find no significant evidence of photon point sources in any of these energy ranges, and we set point-source flux upper limits for each direction in the TA field of view (FOV). A search for unspecified neutral particles was also previously performed by the TA (Abbasi et al. 2015); the limit on the point-source flux of neutral particles obtained in that work is close to the present photon point-source flux limits.
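To make the "multivariate analysis based on a number of SD parameters" concrete at a toy level, here is a heavily simplified sketch. It is not the TA analysis: the feature names (an X_max-like depth, a muon-content proxy, a front-curvature proxy), their synthetic distributions, and the boosted-decision-tree classifier are assumptions standing in for whichever SD observables and MVA the collaboration actually used; it only shows the general shape of training a photon/hadron classifier on labeled Monte Carlo and cutting on its score.

```python
# Toy illustration of an MVA photon/hadron separation -- NOT the TA analysis.
# Feature names and distributions are invented stand-ins for SD observables.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_mc = 5000

# Synthetic "Monte Carlo": photon showers (label 1) tend to have a deeper
# X_max-like maximum, fewer muons, and a more curved front than proton showers.
def fake_showers(label, n):
    xmax  = rng.normal(1000 if label else 800, 60, n)   # depth proxy, invented scale
    muons = rng.normal(0.3 if label else 1.0, 0.2, n)   # muon-content proxy
    curve = rng.normal(1.5 if label else 1.0, 0.3, n)   # front-curvature proxy
    return np.column_stack([xmax, muons, curve])

X = np.vstack([fake_showers(0, n_mc), fake_showers(1, n_mc)])
y = np.array([0] * n_mc + [1] * n_mc)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# In a real search, a photon candidate would be an observed event whose classifier
# score exceeds a cut chosen on MC to fix the hadron contamination.
print("toy separation accuracy:", clf.score(X_te, y_te))
```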
  2. The approximate degree of a Boolean function f is the least degree of a real polynomial that approximates f pointwise to error at most 1/3. The approximate degree of f is known to be a lower bound on the quantum query complexity of f (Beals et al., FOCS 1998 and J. ACM 2001). We find tight or nearly tight bounds on the approximate degree and quantum query complexities of several basic functions. Specifically, we show the following. k-Distinctness: For any constant k, the approximate degree and quantum query complexity of the k-distinctness function are Ω(n^(3/4 − 1/(2k))). This is nearly tight for large k, as Belovs (FOCS 2012) has shown that for any constant k, the approximate degree and quantum query complexity of k-distinctness are O(n^(3/4 − 1/(2^(k+2) − 4))). Image size testing: The approximate degree and quantum query complexity of testing the size of the image of a function [n]→[n] are Ω~(n^(1/2)). This proves a conjecture of Ambainis et al. (SODA 2016), and it implies tight lower bounds on the approximate degree and quantum query complexity of the following natural problems. k-Junta testing: A tight Ω~(k^(1/2)) lower bound for k-junta testing, answering the main open question of Ambainis et al. (SODA 2016). Statistical distance from uniform: A tight Ω~(n^(1/2)) lower bound for approximating the statistical distance of a distribution from uniform, answering the main question left open by Bravyi et al. (STACS 2010 and IEEE Trans. Inf. Theory 2011). Shannon entropy: A tight Ω~(n^(1/2)) lower bound for approximating Shannon entropy up to a certain additive constant, answering a question of Li and Wu (2017). Surjectivity: The approximate degree of the surjectivity function is Ω~(n^(3/4)). The best prior lower bound was Ω(n^(2/3)). Our result matches an upper bound of O~(n^(3/4)) due to Sherstov (STOC 2018), which we reprove using different techniques. The quantum query complexity of this function is known to be Θ(n) (Beame and Machmouchi, Quantum Inf. Comput. 2012 and Sherstov, FOCS 2015). Our upper bound for surjectivity introduces new techniques for approximating Boolean functions by low-degree polynomials. Our lower bounds are proved by significantly refining techniques recently introduced by Bun and Thaler (FOCS 2017).
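To see numerically in what sense the k-distinctness bounds above are nearly tight for large k, the short snippet below evaluates the quoted lower-bound exponent 3/4 − 1/(2k) against Belovs' upper-bound exponent 3/4 − 1/(2^(k+2) − 4) for a few small constants k; this is just arithmetic on the stated bounds, not new analysis.

```python
# Compare the two k-distinctness exponents quoted above: both tend to 3/4 as k
# grows, which is the sense in which the bounds are nearly tight for large k.
for k in range(2, 9):
    lower = 3/4 - 1/(2*k)              # exponent in the Omega(n^...) lower bound
    upper = 3/4 - 1/(2**(k + 2) - 4)   # exponent in Belovs' O(n^...) upper bound
    print(f"k={k}: lower {lower:.4f} vs upper {upper:.4f}, gap {upper - lower:.4f}")
```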
  3. Abstract: Expert testimony varies in scientific quality and jurors have a difficult time evaluating evidence quality (McAuliff et al., 2009). In the current study, we apply Fuzzy Trace Theory principles, examining whether visual and gist aids help jurors calibrate to the strength of scientific evidence. Additionally, we were interested in the role of jurors' individual differences in scientific reasoning skills in their understanding of case evidence. Contrary to our preregistered hypotheses, there was no effect of evidence condition or gist aid on evidence understanding. However, individual differences in jurors' numeracy skills predicted evidence understanding. Summary: Poor-quality expert evidence is sometimes admitted into court (Smithburn, 2004). Jurors' calibration to evidence strength varies widely and is not robustly understood. For instance, previous research has established that jurors lack understanding of the role of control groups, confounds, and sample sizes in scientific research (McAuliff, Kovera, & Nunez, 2009; Mill, Gray, & Mandel, 1994). Still others have found that jurors can distinguish weak from strong evidence when the evidence is presented alone, yet not when it is simultaneously presented with case details (Smith, Bull, & Holliday, 2011). This research highlights the need to present evidence to jurors in a way they can understand. Fuzzy Trace Theory purports that people encode information in exact, verbatim representations and in "gist" representations, which capture the summary, or bottom-line, meaning of the information (Reyna & Brainerd, 1995). It is possible that presenting complex scientific evidence in verbatim form, or appealing to its gist (bottom-line meaning), may influence juror understanding of that evidence. Application of Fuzzy Trace Theory in the medical field has shown that gist representations help laypeople better understand the risks and benefits of medical treatment (Brust-Renck, Reyna, Wilhelms, & Lazar, 2016). Yet little research has applied Fuzzy Trace Theory to information comprehension and application within the context of a jury (cf. Reyna et al., 2015). Additionally, it is likely that jurors' individual characteristics, such as scientific reasoning abilities and cognitive tendencies, influence their ability to understand and apply complex scientific information (Coutinho, 2006). Methods: The purpose of this study was to examine how jurors calibrate to the strength of scientific information, and whether individual difference variables and gist aids inspired by Fuzzy Trace Theory help jurors better understand complicated science of differing quality. We used a 2 (quality of scientific evidence: high vs. low) x 2 (decision aid to improve calibration: gist information vs. no gist information) between-subjects design. All hypotheses were preregistered on the Open Science Framework. Jury-eligible community members participated (430 jurors across 90 juries; M_age = 37.58, SD = 16.17, 58% female, 56.93% White). Each jury was randomly assigned to one of the four possible conditions. Participants individually filled out measures related to their scientific reasoning skills prior to watching a mock jury trial. The trial was about an armed bank robbery and consisted of various pieces of testimony and evidence (e.g., eyewitness testimony, a police lineup identification, and a sweatshirt found with the stolen bank money).
The key piece of evidence was mitochondrial DNA (mtDNA) evidence collected from hair on a sweatshirt (materials from Hans et al., 2011). Two experts presented opposing opinions about the scientific evidence related to the mtDNA match estimate for the defendant's identification. The quality and content of this mtDNA evidence differed between the two conditions: the high-quality evidence condition used a larger database than the low-quality evidence condition to compare to the mtDNA sample and could exclude a larger percentage of people. In the decision aid manipulation, experts in the gist information group presented gist-aid-inspired visuals and examples to help explain the proportion of people that could not be excluded as a match, whereas those in the no gist information group were not given any aid to help them understand the mtDNA evidence presented. After viewing the trial, participants filled out a questionnaire on how well they understood the mtDNA evidence and their overall judgments of the case (e.g., verdict, witness credibility, scientific evidence strength). They filled this questionnaire out again after a 45-minute deliberation. Measures: We measured Attitudes Toward Science (ATS) with indices of scientific promise and scientific reservations (Hans et al., 2011; originally developed by the National Science Board, 2004; 2006). We used Drummond and Fischhoff's (2015) Scientific Reasoning Scale (SRS) to measure scientific reasoning skills. Weller et al.'s (2012) Numeracy Scale (WNS) measured proficiency in reasoning with quantitative information. The NFC-Short Form (Cacioppo et al., 1984) measured need for cognition. We developed a 20-item comprehension test for the mtDNA scientific information in the cases (modeled on Hans et al., 2011, and McAuliff et al., 2009): participants were shown 20 statements related to DNA evidence and asked whether each statement was True or False, and the test was scored out of 20 points. Results: For this project, we measured calibration to the scientific evidence in a few different ways. We are building a full model with these various operationalizations to be presented at APLS, but focus only on one of the calibration DVs (i.e., objective understanding of the mtDNA evidence) in the current proposal. We conducted a general linear model with total score on the mtDNA understanding measure as the DV and quality of scientific evidence condition, decision aid condition, and the four individual difference measures (i.e., NFC, ATS, WNS, and SRS) as predictors (a sketch of this model in code follows this abstract). Contrary to our main hypotheses, neither evidence quality nor decision aid condition affected juror understanding. However, the individual difference variables did: we found significant main effects for Scientific Reasoning Skills, F(1, 427) = 16.03, p < .001, ηp² = .04, the Weller Numeracy Scale, F(1, 427) = 15.19, p < .001, ηp² = .03, and Need for Cognition, F(1, 427) = 16.80, p < .001, ηp² = .04, such that those who scored higher on these measures displayed better understanding of the scientific evidence. In addition, there was a significant interaction of evidence quality condition and scores on the Weller Numeracy Scale, F(1, 427) = 4.10, p = .04, ηp² = .01. Further results will be discussed. Discussion: These data suggest jurors are not sensitive to differences in the quality of scientific mtDNA evidence, and also that our attempt at helping sensitize them with Fuzzy Trace Theory-inspired aids did not improve calibration.
Individual scientific reasoning abilities and general cognition styles were better predictors of understanding this scientific information. These results suggest a need for further exploration of approaches to help jurors differentiate between high- and low-quality evidence. Note: The third author was supported by an AP-LS AP Award for her role in this research. Learning Objective: Participants will be able to describe how individual differences in scientific reasoning skills help jurors understand complex scientific evidence.
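The general linear model reported in the Results above has a direct expression in standard statistics software. The sketch below illustrates the model structure only and is not the authors' code: the data file and column names (understanding, quality, gist_aid, nfc, ats, wns, srs) are hypothetical stand-ins, and the formula includes the quality-by-numeracy interaction the abstract reports.

```python
# Sketch of the reported general linear model; file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# One row per juror: the 20-point mtDNA understanding score, the two
# between-subjects conditions, and the four individual-difference scales.
df = pd.read_csv("jury_data.csv")  # hypothetical data file

model = smf.ols(
    "understanding ~ C(quality) + C(gist_aid) + nfc + ats + srs + wns"
    " + C(quality):wns",  # evidence-quality x numeracy interaction
    data=df,
).fit()
print(model.summary())
```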
  4. Mathematics is an important tool in engineering practice, as mathematical rules govern many designed systems (e.g., Nathan et al., 2013; Nathan et al., 2017). Investigations of structural engineers suggest that mathematical modelling is ubiquitous in their work, but the nature of the tasks they confront is not well represented in the K-12 classroom (e.g., Gainsburg, 2006). This follows a larger literature base suggesting that school mathematics is often inauthentic and does not represent how mathematics is used in practice. At the same time, algebra is a persistent gatekeeper to careers in engineering (e.g., Harackiewicz et al., 2012; Olson & Riordan, 2012). In the present study, we interviewed 12 engineers, asking them a series of questions about how they use specific kinds of algebraic functions (e.g., linear, exponential, quadratic) in their work. The purpose of these interviews was to use the responses to create mathematical scenarios for College Algebra activities that would be personalized to community college students' career interests. This curriculum would represent how algebra is used in practice by STEM professionals. However, our results were not what we expected. In this paper, we discuss three major themes that arose from qualitative analyses of the interviews. First, we found that engineers resoundingly endorsed the importance of College Algebra concepts for their day-to-day work, and uniformly stated that math was vital to engineering. However, the second theme was that the engineers struggled to describe how they used functions more complex than linear (i.e., y = mx + b) in their work. Students typically learn about linear functions prior to College Algebra, and in College Algebra explore more complex functions such as polynomial, logarithmic, and exponential functions. Third, we found that engineers rarely use the explicit algebraic form of a function (e.g., y = 3x + 5), and instead rely on tables, graphs, informal arithmetic, and computerized computation systems where the equation is invisible. This was surprising, given that the bulk of the College Algebra course involves learning how to use and manipulate these formal expressions, practicing skills like factoring, simplifying, solving, and interpreting parameters. We also found that these trends for engineers followed trends we saw in our larger sample, in which we interviewed professionals from across STEM fields. This study calls into question the gatekeeping role of formal algebraic courses like College Algebra for STEM careers. If engineers don't actually use 75% of the content in these courses, why are they required? One reason might be that the courses are simply outdated, or arguments might be made that learning mathematics builds more general modelling and problem-solving skills. However, research from educational psychology on the difficulty of transfer would strongly refute this point: people tend to learn things that are very specific. Another reason to consider is that formal mathematics courses like advanced algebra have emerged as a very convenient mechanism to filter people by race, gender, and socioeconomic background, and to promote the maintenance of the "status quo" inequality in STEM fields. This is a critical issue to investigate for the future of the field of engineering as a whole.
  5. In a recent work (Ghazi et al., SODA 2016), the authors, together with Komargodski and Kothari, initiated the study of communication with contextual uncertainty, a setup aiming to understand how efficient communication is possible when the communicating parties imperfectly share a huge context. In this setting, Alice is given a function f and an input string x, and Bob is given a function g and an input string y. The pair (x,y) comes from a known distribution mu, and f and g are guaranteed to be close under this distribution. Alice and Bob wish to compute g(x,y) with high probability. The lack of agreement between Alice and Bob on the function being computed captures the uncertainty in the context. The previous work showed that any problem with one-way communication complexity k in the standard model (i.e., without uncertainty; in other words, under the promise that f=g) has public-coin communication of at most O(k(1+I)) bits in the uncertain case, where I is the mutual information between x and y. Moreover, a lower bound of Omega(sqrt{I}) bits on the public-coin uncertain communication was also shown. However, an important question that was left open concerns the power that public randomness brings to uncertain communication. Can Alice and Bob achieve efficient communication amid uncertainty without using public randomness? And how powerful are public-coin protocols in overcoming uncertainty? Motivated by these two questions: - We prove the first separation between private-coin uncertain communication and public-coin uncertain communication. Namely, we exhibit a function class for which the communication in the standard model and the public-coin uncertain communication are O(1) while the private-coin uncertain communication is a growing function of n (the length of the inputs). This lower bound (proved with respect to the uniform distribution) is in sharp contrast with the case of public-coin uncertain communication, which was shown by the previous work to be within a constant factor of the certain communication. This lower bound also implies the first separation between public-coin uncertain communication and deterministic uncertain communication. Interestingly, we also show that if Alice and Bob imperfectly share a sequence of random bits (a setup weaker than public randomness), then achieving a constant blow-up in communication is still possible. - We improve the lower bound of the previous work on public-coin uncertain communication. Namely, we exhibit a function class and a distribution (with mutual information I approx n) for which the one-way certain communication is k bits but the one-way public-coin uncertain communication is at least Omega(sqrt{k}*sqrt{I}) bits. Our proofs introduce new problems in the standard communication complexity model and prove lower bounds for these problems. Both the problems and the lower bound techniques may be of general interest.
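For readers who prefer a formal statement, the LaTeX block below only restates the uncertain-communication setup and the bounds quoted in this abstract (with I denoting the mutual information I(X;Y) under mu); it adds nothing beyond them, and the CC notation is an assumption introduced here for brevity.

```latex
% Restating the abstract's setup (assumes amsmath). Alice holds (f, x), Bob holds
% (g, y), with (x, y) ~ mu and f, g close under mu; the goal is g(x, y) w.h.p.
\begin{align*}
  \text{Prior upper bound:}\quad &
    \mathrm{CC}^{\text{1-way}}_{f=g} \le k
    \;\Longrightarrow\;
    \mathrm{CC}^{\text{1-way, pub}}_{\text{uncertain}} \le O\big(k(1+I)\big),
    \qquad I = I(X;Y),\\
  \text{Prior lower bound:}\quad &
    \mathrm{CC}^{\text{pub}}_{\text{uncertain}} \ge \Omega\big(\sqrt{I}\big),\\
  \text{This work:}\quad &
    \mathrm{CC}^{\text{1-way, pub}}_{\text{uncertain}} \ge \Omega\big(\sqrt{k}\cdot\sqrt{I}\big)
    \ \text{ for some } (f,g,\mu) \text{ with } I \approx n.
\end{align*}
```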