The epsilon-approximate degree, deg_epsilon(f), of a Boolean function f is the least degree of a real-valued polynomial that approximates f pointwise to within epsilon. A sound and complete certificate for the approximate degree being at least k is a pair of probability distributions, also known as a dual polynomial, that are perfectly k-wise indistinguishable but are distinguished by f with advantage 1 - epsilon. Our contributions are:
- We give a simple, explicit new construction of a dual polynomial for the AND function on n bits, certifying that its epsilon-approximate degree is Omega(sqrt{n log(1/epsilon)}). This construction is the first to extend to the notion of weighted degree, and yields the first explicit certificate that the 1/3-approximate degree of any (possibly unbalanced) read-once DNF is Omega(sqrt{n}). It draws a novel connection between the approximate degree of AND and anti-concentration of the binomial distribution.
- We show that any pair of symmetric distributions on n-bit strings that are perfectly k-wise indistinguishable are also statistically K-wise indistinguishable with at most K^{3/2} * exp(-Omega(k^2/K)) error for all k < K <= n/64. This bound is essentially tight, and implies that any symmetric function f is a reconstruction function with constant advantage for a ramp secret sharing scheme that is secure against size-K coalitions with statistical error K^{3/2} * exp(-Omega(deg_{1/3}(f)^2/K)) for all values of K up to n/64 simultaneously. Previous secret sharing schemes required that K be determined in advance and only worked for f = AND. Our analysis draws another new connection between approximate degree and concentration phenomena. As a corollary of this result, we show that for any d <= n/64, any degree-d polynomial approximating a symmetric function f to error 1/3 must have coefficients of l_1-norm at least K^{-3/2} * exp(Omega(deg_{1/3}(f)^2/d)). We also show this bound is essentially tight for any d > deg_{1/3}(f). These upper and lower bounds were previously known only in the case f = AND.
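To make the certificate notion above concrete, here is a minimal brute-force sketch (our own illustration, not the paper's construction): given two distributions on {0,1}^n as Python dicts, it checks perfect k-wise indistinguishability by comparing all marginals on at most k coordinates, and computes the distinguishing advantage of a function f. All names are illustrative; the toy example is the standard n = 2, k = 1 certificate for AND.

```python
from itertools import combinations

def marginal(dist, coords):
    """Marginal of `dist` (dict: n-bit tuple -> probability) on the coordinate set `coords`."""
    out = {}
    for x, p in dist.items():
        key = tuple(x[i] for i in coords)
        out[key] = out.get(key, 0.0) + p
    return out

def perfectly_k_wise_indistinguishable(mu0, mu1, n, k, tol=1e-12):
    """True iff every projection onto at most k coordinates has identical marginals."""
    for r in range(1, k + 1):
        for coords in combinations(range(n), r):
            m0, m1 = marginal(mu0, coords), marginal(mu1, coords)
            if any(abs(m0.get(key, 0.0) - m1.get(key, 0.0)) > tol
                   for key in set(m0) | set(m1)):
                return False
    return True

def advantage(f, mu0, mu1):
    """|E_{mu1}[f] - E_{mu0}[f]|: how well f tells the two distributions apart."""
    return abs(sum(p * f(x) for x, p in mu1.items())
               - sum(p * f(x) for x, p in mu0.items()))

if __name__ == "__main__":
    # Toy certificate on n = 2, k = 1: each single bit is uniform under both
    # distributions, yet AND distinguishes them with advantage 1/2.
    mu1 = {(0, 0): 0.5, (1, 1): 0.5}
    mu0 = {(0, 1): 0.5, (1, 0): 0.5}
    AND = lambda x: int(all(x))
    print(perfectly_k_wise_indistinguishable(mu0, mu1, n=2, k=1))  # True
    print(advantage(AND, mu0, mu1))                                # 0.5
```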
On the Rational Degree of Boolean Functions and Applications
Vishnu Iyer, Siddhartha Jain, Matt Kovacs-Deak, Vinayak M. Kumar, Luke Schaeffer, Daochen Wang, Michael Whitmeyer

We study a natural complexity measure of Boolean functions known as the (exact) rational degree. For total functions f, it is conjectured that rdeg(f) is polynomially related to deg(f), where deg(f) is the Fourier degree. Towards this conjecture, we show that symmetric functions have rational degree at least deg(f)/2 and that monotone functions have rational degree at least sqrt(deg(f)). We observe that both of these lower bounds are tight. In addition, we show that all read-once depth-d Boolean formulae have rational degree at least Ω(deg(f)^{1/d}). Furthermore, we show that almost every Boolean function on n variables has rational degree at least n/2 − O(sqrt(n)). In contrast to total functions, we exhibit partial functions that witness unbounded separations between rational and approximate degree, in both directions. As a consequence, we show that for quantum computers, post-selection and bounded error are incomparable resources in the black-box model.
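For concreteness, the exact degree deg(f) referenced above is the degree of the unique multilinear polynomial agreeing with f on {0,1}^n, which coincides with the Fourier degree. The following is a minimal brute-force sketch for small n; it is our own illustration, not code from the paper, and the helper names are ours.

```python
from itertools import product

def multilinear_coeffs(f, n):
    """Coefficient of prod_{i in S} x_i via Mobius inversion:
    c_S = sum over T subset of S of (-1)^{|S| - |T|} * f(indicator of T)."""
    coeffs = {}
    for S in product((0, 1), repeat=n):
        support = [i for i in range(n) if S[i]]
        c = 0
        for T in product((0, 1), repeat=len(support)):
            x = [0] * n
            for pos, bit in zip(support, T):
                x[pos] = bit
            c += (-1) ** (len(support) - sum(T)) * f(tuple(x))
        coeffs[tuple(support)] = c
    return coeffs

def exact_degree(f, n):
    """deg(f): largest |S| with a nonzero coefficient (equals the Fourier degree)."""
    return max((len(S) for S, c in multilinear_coeffs(f, n).items() if c != 0),
               default=0)

if __name__ == "__main__":
    AND3 = lambda x: int(all(x))        # x1*x2*x3           -> degree 3
    XOR2 = lambda x: (x[0] + x[1]) % 2  # x1 + x2 - 2*x1*x2  -> degree 2
    OR2 = lambda x: int(any(x))         # x1 + x2 - x1*x2    -> degree 2
    print(exact_degree(AND3, 3), exact_degree(XOR2, 2), exact_degree(OR2, 2))  # 3 2 2
```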
- Award ID(s):
- 2006359
- PAR ID:
- 10554970
- Publisher / Repository:
- arxiv.org
- Date Published:
- Journal Name:
- arXiv.org
- Volume:
- arxiv:2310
- Issue:
- 2310.08004
- ISSN:
- 2331-8422
- Page Range / eLocation ID:
- 1-17
- Subject(s) / Keyword(s):
- computational complexity quantum computing Boolean functions
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Boolean functions play an important role in many different areas of computer science. The _local sensitivity_ of a Boolean function $$f:\{0,1\}^n\to \{0,1\}$$ on an input $$x\in\{0,1\}^n$$ is the number of coordinates whose flip changes the value of $$f(x)$$, i.e., the number of indices $$i$$ such that $$f(x)\not=f(x+e_i)$$, where $$e_i$$ is the $$i$$-th unit vector. The _sensitivity_ of a Boolean function is its maximum local sensitivity. In other words, the sensitivity measures the robustness of a Boolean function with respect to a perturbation of its input. Another notion that measures this robustness is block sensitivity. The _local block sensitivity_ of a Boolean function $$f:\{0,1\}^n\to \{0,1\}$$ on an input $$x\in\{0,1\}^n$$ is the maximum number of pairwise disjoint subsets $$I$$ of $$\{1,\ldots,n\}$$ such that flipping the coordinates indexed by $$I$$ changes the value of $$f(x)$$, and the _block sensitivity_ of $$f$$ is its maximum local block sensitivity. Since the local block sensitivity is at least the local sensitivity for any input $$x$$, the block sensitivity of $$f$$ is at least the sensitivity of $$f$$. The next example demonstrates that the block sensitivity of a Boolean function is not linearly bounded by its sensitivity. Fix an integer $$k\ge 2$$ and define a Boolean function $$f:\{0,1\}^{2k^2}\to\{0,1\}$$ as follows: the coordinates of $$x\in\{0,1\}^{2k^2}$$ are split into $$k$$ blocks of size $$2k$$ each, and $$f(x)=1$$ if and only if at least one of the blocks contains exactly two entries equal to one and these entries are consecutive. While the sensitivity of the function $$f$$ is $$2k$$, its block sensitivity is $$k^2$$. The Sensitivity Conjecture, made by Nisan and Szegedy in 1992, asserts that the block sensitivity of a Boolean function is polynomially bounded by its sensitivity. The example above shows that the degree of such a polynomial must be at least two. The Sensitivity Conjecture has recently been proven by Huang in [Annals of Mathematics 190 (2019), 949-955](https://doi.org/10.4007/annals.2019.190.3.6). He proved the following combinatorial statement that implies the conjecture (with the degree of the polynomial equal to four): any subset of more than half of the vertices of the $$n$$-dimensional cube $$\{0,1\}^n$$ induces a subgraph that contains a vertex with degree at least $$\sqrt{n}$$. The present article extends this result as follows: every Cayley graph with the vertex set $$\{0,1\}^n$$ and any generating set of size $$d$$ (the vertex set is viewed as a vector space over the binary field) satisfies that any subset of more than half of its vertices induces a subgraph that contains a vertex of degree at least $$\sqrt{d}$$. In particular, when the generating set consists of the $$n$$ unit vectors, the Cayley graph is the $$n$$-dimensional hypercube. (A brute-force toy computation of sensitivity and block sensitivity appears in the first sketch after this list.)
-
Monotonicity testing of Boolean functions on the hypergrid, $$f:[n]^d \to \{0,1\}$$, is a classic topic in property testing. Determining the non-adaptive complexity of this problem is an important open question. For arbitrary $$n$$, [Black-Chakrabarty-Seshadhri, SODA 2020] describe a tester with query complexity $$\widetilde{O}(\varepsilon^{-4/3}d^{5/6})$$. This complexity is independent of $$n$$, but has a suboptimal dependence on $$d$$. Recently, [Braverman-Khot-Kindler-Minzer, ITCS 2023] and [Black-Chakrabarty-Seshadhri, STOC 2023] describe $$\widetilde{O}(\varepsilon^{-2} n^3\sqrt{d})$$- and $$\widetilde{O}(\varepsilon^{-2} n\sqrt{d})$$-query testers, respectively. These testers have an almost optimal dependence on $$d$$, but a suboptimal polynomial dependence on $$n$$. In this paper, we describe a non-adaptive, one-sided monotonicity tester with query complexity $$O(\varepsilon^{-2} d^{1/2 + o(1)})$$, _independent_ of $$n$$. Up to the $$d^{o(1)}$$ factors, our result resolves the non-adaptive complexity of monotonicity testing for Boolean functions on hypergrids. The independence of $$n$$ yields a non-adaptive, one-sided $$O(\varepsilon^{-2} d^{1/2 + o(1)})$$-query monotonicity tester for Boolean functions $$f:\mathbb{R}^d \to \{0,1\}$$ associated with an arbitrary product measure. (A naive pair tester illustrating the query model appears in the second sketch after this list.)
-
The noise sensitivity of a Boolean function f: {0,1}^n -> {0,1} is one of its fundamental properties. For noise parameter delta, the noise sensitivity is denoted NS_{delta}[f]. This quantity is defined as follows: first, pick x = (x_1,...,x_n) uniformly at random from {0,1}^n, then pick z by flipping each x_i independently with probability delta. NS_{delta}[f] is defined to equal Pr[f(x) != f(z)]. Much of the existing literature on noise sensitivity explores the following two directions: (1) showing that functions with low noise sensitivity are structured in certain ways, and (2) mathematically showing that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore have useful structure. The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating NS_{delta}[f] given oracle access to the function f. We show that the standard sampling approach is essentially optimal for general Boolean functions. Therefore, we focus on estimating the noise sensitivity of monotone functions, which form an important subclass of Boolean functions, since many functions of interest are either monotone or can be simply transformed into a monotone function (for example, the class of unate functions consists of all the functions that can be made monotone by reorienting some of their coordinates [O'Donnell, 2014]). Specifically, we study the algorithmic problem of approximating NS_{delta}[f] for monotone f, given the promise that NS_{delta}[f] >= 1/n^C for constant C, and for delta in the range 1/n <= delta <= 1/2. For such f and delta, we give a randomized algorithm performing O((min(1, sqrt{n} delta log^{1.5} n) / NS_{delta}[f]) poly(1/epsilon)) queries and approximating NS_{delta}[f] to within a multiplicative factor of (1 +/- epsilon). Given the same constraints on f and delta, we also prove a lower bound of Omega(min(1, sqrt{n} delta) / (NS_{delta}[f] * n^{xi})) on the query complexity of any algorithm that approximates NS_{delta}[f] to within any constant factor, where xi can be any positive constant. Thus, our algorithm's query complexity is close to optimal in terms of its dependence on n. We introduce a novel descending-ascending view of noise sensitivity, and use it as a central tool for the analysis of our algorithm. To prove lower bounds on query complexity, we develop a technique that reduces computational questions about query complexity to combinatorial questions about the existence of "thin" functions with certain properties. The existence of such "thin" functions is proved using the probabilistic method. These techniques also yield new lower bounds on the query complexity of approximating other fundamental properties of Boolean functions: the total influence and the bias. (The baseline sampling estimator is sketched in the third code example after this list.)
-
Given a function f on (F_2)^n, we study the following problem: what is the largest affine subspace U such that, when restricted to U, all the non-trivial Fourier coefficients of f are very small? For the natural class of bounded Fourier degree-d functions f: (F_2)^n → [−1, 1], we show that there exists an affine subspace of dimension at least Ω̃(n^(1/d!) k^(−2)) wherein all of f's nontrivial Fourier coefficients become smaller than 2^(−k). To complement this result, we show the existence of degree-d functions with coefficients larger than 2^(−d log n) when restricted to any affine subspace of dimension larger than Ω(d n^(1/(d−1))). In addition, we give explicit examples of functions with analogous but weaker properties. Along the way, we provide multiple characterizations of the Fourier coefficients of functions restricted to subspaces of (F_2)^n that may be useful in other contexts. Finally, we highlight applications and connections of our results to parity kill number and affine dispersers. (The last sketch after this list spells out what restriction to an affine subspace looks like concretely.)
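The first related item above defines sensitivity and block sensitivity. The following minimal sketch (our own illustration, feasible only for very small n) computes both measures by brute force for a Boolean function given as a Python callable on {0,1}^n.

```python
from itertools import product, combinations

def flip(x, block):
    """Flip the coordinates of x indexed by `block`."""
    y = list(x)
    for i in block:
        y[i] = 1 - y[i]
    return tuple(y)

def sensitivity(f, n):
    """Maximum over x of the number of single-bit flips that change f(x)."""
    return max(sum(f(x) != f(flip(x, (i,))) for i in range(n))
               for x in product((0, 1), repeat=n))

def max_disjoint(blocks, used=frozenset()):
    """Largest number of pairwise disjoint blocks (exhaustive search, exponential time)."""
    best = 0
    for i, B in enumerate(blocks):
        if not (B & used):
            best = max(best, 1 + max_disjoint(blocks[i + 1:], used | B))
    return best

def block_sensitivity(f, n):
    """Maximum over x of the number of pairwise disjoint sensitive blocks at x."""
    best = 0
    for x in product((0, 1), repeat=n):
        sensitive = [frozenset(B) for r in range(1, n + 1)
                     for B in combinations(range(n), r) if f(x) != f(flip(x, B))]
        best = max(best, max_disjoint(sensitive))
    return best

if __name__ == "__main__":
    OR3 = lambda x: int(any(x))
    print(sensitivity(OR3, 3), block_sensitivity(OR3, 3))  # 3 3 (both attained at 000)
```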
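The second item concerns monotonicity testing on the hypergrid. The sketch below is not the tester analyzed in the works cited there; it is only a naive, illustrative pair tester showing the query model: sample axis-parallel comparable pairs and reject if a violation is witnessed. Function and parameter names are our own.

```python
import random

def naive_monotonicity_tester(f, n, d, queries=1000):
    """One-sided: always accepts a monotone f: [n]^d -> {0,1}; rejects only on a witnessed violation."""
    for _ in range(queries):
        x = [random.randrange(n) for _ in range(d)]
        i = random.randrange(d)
        a, b = sorted(random.sample(range(n), 2)) if n > 1 else (0, 0)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = a, b               # lo <= hi coordinate-wise
        if f(tuple(lo)) > f(tuple(hi)):   # f(lo) = 1 but f(hi) = 0: violation
            return False
    return True

if __name__ == "__main__":
    # Threshold function on [5]^3 is monotone, so the tester accepts.
    thresh = lambda x: int(sum(x) >= 7)
    print(naive_monotonicity_tester(thresh, 5, 3))  # True
    # The anti-threshold is far from monotone; rejected with high probability.
    anti = lambda x: int(sum(x) < 7)
    print(naive_monotonicity_tester(anti, 5, 3))    # False (w.h.p.)
```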
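The third item studies estimating NS_delta[f]. Here is a minimal Monte Carlo sketch of the standard sampling baseline that the paper improves on for monotone functions; it is our own illustration, not the paper's algorithm.

```python
import random

def estimate_noise_sensitivity(f, n, delta, samples=100_000):
    """Empirical estimate of NS_delta[f] = Pr[f(x) != f(z)], z a delta-noisy copy of x."""
    hits = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        z = [1 - b if random.random() < delta else b for b in x]
        hits += f(tuple(x)) != f(tuple(z))
    return hits / samples

if __name__ == "__main__":
    PARITY = lambda x: sum(x) % 2
    n, delta = 20, 0.1
    # For parity, NS_delta = (1 - (1 - 2*delta)^n) / 2, roughly 0.494 here.
    print(estimate_noise_sensitivity(PARITY, n, delta))
```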
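Finally, for the last item, the sketch below (our own, small-n illustration) restricts a bounded function f: (F_2)^n -> [-1, 1] to an affine subspace a + span(b_1, ..., b_k) and computes the Fourier coefficients of the restriction; in the toy example the restriction of the 4-bit inner-product function to this particular subspace is identically 1, so all of its nontrivial Fourier coefficients vanish.

```python
from itertools import product

def dot(u, v):
    """Inner product over F_2."""
    return sum(ui & vi for ui, vi in zip(u, v)) % 2

def restrict(f, a, basis):
    """Return g: F_2^k -> R with g(y) = f(a + sum_i y_i * b_i), arithmetic over F_2."""
    def g(y):
        x = list(a)
        for yi, b in zip(y, basis):
            if yi:
                x = [(xj + bj) % 2 for xj, bj in zip(x, b)]
        return f(tuple(x))
    return g

def fourier(g, k):
    """hat{g}(S) = E_y[g(y) * (-1)^{<S, y>}] for every S in F_2^k."""
    pts = list(product((0, 1), repeat=k))
    return {S: sum(g(y) * (-1) ** dot(S, y) for y in pts) / len(pts) for S in pts}

if __name__ == "__main__":
    n, k = 4, 2
    f = lambda x: (-1) ** (x[0] * x[1] + x[2] * x[3])  # 4-bit inner-product function
    a = (0, 0, 0, 0)
    basis = [(1, 0, 1, 0), (0, 1, 0, 1)]               # spans a 2-dimensional subspace
    g = restrict(f, a, basis)
    for S, c in fourier(g, k).items():
        print(S, round(c, 3))  # (0,0) -> 1.0; all nontrivial coefficients -> 0.0
```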