The noise sensitivity of a Boolean function f: {0,1}^n -> {0,1} is one of its fundamental properties. For a noise parameter delta, the noise sensitivity is denoted NS_{delta}[f] and is defined as follows: first, pick x = (x_1,...,x_n) uniformly at random from {0,1}^n, then pick z by flipping each x_i independently with probability delta. NS_{delta}[f] is defined to equal Pr[f(x) != f(z)]. Much of the existing literature on noise sensitivity explores the following two directions: (1) showing that functions with low noise sensitivity are structured in certain ways, and (2) mathematically showing that certain classes of functions have low noise sensitivity. Combined, these two research directions show that certain classes of functions have low noise sensitivity and therefore have useful structure. The fundamental importance of noise sensitivity, together with this wealth of structural results, motivates the algorithmic question of approximating NS_{delta}[f] given oracle access to the function f. We show that the standard sampling approach is essentially optimal for general Boolean functions. Therefore, we focus on estimating the noise sensitivity of monotone functions, which form an important subclass of Boolean functions, since many functions of interest are either monotone or can be simply transformed into a monotone function.
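As a point of reference for the sampling approach mentioned above, the following is a minimal sketch of the standard Monte Carlo estimator for NS_{delta}[f], with the oracle modeled as a Python callable; the parity sanity check and the sample count are illustrative choices, not taken from the paper.

```python
import random

def estimate_noise_sensitivity(f, n, delta, samples=100_000):
    """Monte Carlo estimate of NS_delta[f] given oracle access to f:
    draw x uniformly from {0,1}^n, obtain z by flipping each bit of x
    independently with probability delta, and record how often f(x) != f(z)."""
    disagreements = 0
    for _ in range(samples):
        x = [random.randint(0, 1) for _ in range(n)]
        z = [b ^ (random.random() < delta) for b in x]
        disagreements += f(x) != f(z)
    return disagreements / samples

# Sanity check: for parity on n bits, NS_delta = (1 - (1 - 2*delta)**n) / 2.
parity = lambda bits: sum(bits) % 2
print(estimate_noise_sensitivity(parity, n=5, delta=0.1))  # ~0.336
```

Since each sample costs two queries, obtaining a multiplicative (1 +/- epsilon) estimate this way takes on the order of 1/(epsilon^2 * NS_{delta}[f]) queries, which is the baseline cost of the sampling approach.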
Biasing Boolean Functions and Collective Coin-Flipping Protocols over Arbitrary Product Distributions
The seminal result of Kahn, Kalai and Linial shows that a coalition of O(n/(log n)) players can bias the outcome of any Boolean function f: {0,1}^n -> {0,1} with respect to the uniform measure. We extend their result to arbitrary product measures on {0,1}^n, by combining their argument with a completely different argument that handles very biased input bits. We view this result as a step towards proving a conjecture of Friedgut, which states that Boolean functions on the continuous cube [0,1]^n (or, equivalently, on {1,...,n}^n) can be biased using coalitions of o(n) players. This is the first step taken in this direction since Friedgut proposed the conjecture in 2004. Russell, Saks and Zuckerman extended the result of Kahn, Kalai and Linial to multi-round protocols, showing that when the number of rounds is o(log^* n), a coalition of o(n) players can bias the outcome with respect to the uniform measure. We extend this result as well to arbitrary product measures on {0,1}^n. The argument of Russell et al. relies on the fact that a coalition of o(n) players can boost the expectation of any Boolean function from epsilon to 1 - epsilon with respect to the uniform measure. This fails for general product measures.
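To make the notion of a coalition biasing the outcome concrete, here is a toy sketch, not taken from the paper, that computes by exact enumeration how much a small coalition that fixes its own bits can shift Pr[f = 1] under a product measure; majority on nine bits, the uniform measure, and the particular coalition are all illustrative choices.

```python
import itertools

def majority(bits):
    """Majority on an odd number of bits."""
    return int(sum(bits) > len(bits) // 2)

def prob_one(f, n, p, fixed):
    """Exact Pr[f(x) = 1] when x_i ~ Bernoulli(p[i]) independently,
    except that the coordinates in `fixed` are pinned by the coalition."""
    free = [i for i in range(n) if i not in fixed]
    total = 0.0
    for assignment in itertools.product([0, 1], repeat=len(free)):
        x = [0] * n
        weight = 1.0
        for i, b in zip(free, assignment):
            x[i] = b
            weight *= p[i] if b == 1 else 1.0 - p[i]
        for i, b in fixed.items():
            x[i] = b
        total += weight * f(x)
    return total

n = 9
p = [0.5] * n                    # a product measure; uniform is the simplest case
coalition = {0: 1, 1: 1, 2: 1}   # three colluding players fix their bits to 1
print(prob_one(majority, n, p, {}))         # 0.5 without the coalition
print(prob_one(majority, n, p, coalition))  # ~0.89 once the coalition intervenes
```

For majority, a coalition of roughly sqrt(n) players already creates constant bias; the result of Kahn, Kalai and Linial quoted above guarantees a coalition of O(n/(log n)) players for every Boolean function under the uniform measure.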
- Award ID(s): 1705028
- Publication Date:
- NSF-PAR ID: 10107435
- Journal Name: 46th International Colloquium on Automata, Languages and Programming (ICALP)
- ISSN: 1868-8969
- Sponsoring Org: National Science Foundation
More Like this
- We show that a very simple pseudorandom generator fools intersections of k linear threshold functions (LTFs) and arbitrary functions of k LTFs over n-dimensional Gaussian space. The two analyses of our PRG (for intersections versus arbitrary functions of LTFs) are quite different from each other and from previous analyses of PRGs for functions of halfspaces. Our analysis for arbitrary functions of LTFs establishes bounds on the Wasserstein distance between Gaussian random vectors with similar covariance matrices, and combines these bounds with a conversion from Wasserstein distance to "union-of-orthants" distance from [Xi Chen et al., 2014]. Our analysis for intersections of LTFs uses extensions of the classical Sudakov-Fernique type inequalities, which give bounds on the difference between the expectations of the maxima of two Gaussian random vectors with similar covariance matrices. For all values of k, our generator has seed length O(log n) + poly(k) for arbitrary functions of k LTFs and O(log n) + poly(log k) for intersections of k LTFs. The best previous result, due to [Gopalan et al., 2010], only gave such PRGs for arbitrary functions of k LTFs when k = O(log log n) and for intersections of k LTFs when k = O((log n)/(log log n)). Thus our PRG ...
- Boolean functions play an important role in many different areas of computer science. The _local sensitivity_ of a Boolean function $f:\{0,1\}^n\to \{0,1\}$ on an input $x\in\{0,1\}^n$ is the number of coordinates whose flip changes the value of $f(x)$, i.e., the number of indices $i$ such that $f(x)\not=f(x+e_i)$, where $e_i$ is the $i$-th unit vector. The _sensitivity_ of a Boolean function is its maximum local sensitivity. In other words, the sensitivity measures the robustness of a Boolean function with respect to a perturbation of its input. Another notion that measures the robustness is block sensitivity. The _local block sensitivity_ of a Boolean function $f:\{0,1\}^n\to \{0,1\}$ on an input $x\in\{0,1\}^n$ is the maximum number of pairwise disjoint subsets $I$ of $\{1,\dots,n\}$ such that flipping the coordinates indexed by $I$ changes the value of $f(x)$, and the _block sensitivity_ of $f$ is its maximum local block sensitivity. Since the local block sensitivity is at least the local sensitivity for any input $x$, the block sensitivity of $f$ is at least the sensitivity of $f$ (a brute-force sketch computing both quantities on a small example appears after this list). The next example demonstrates that the block sensitivity of a Boolean function is not linearly bounded by its sensitivity. Fix an integer $k\ge 2$ and define a Boolean function $f:\{0,1\}^{2k^2}\to\{0,1\}$ as follows: the ...
- Query-to-communication lifting theorems, which connect the query complexity of a Boolean function to the communication complexity of an associated 'lifted' function obtained by composing the function with many copies of another function known as a gadget (a small sketch of this gadget composition appears after this list), have been instrumental in resolving many open questions in computational complexity. Several important complexity questions could be resolved if we could make substantial improvements in the input size required for lifting with the Index function, from its current near-linear size down to polylogarithmic in the number of inputs N of the original function or, ideally, constant. The near-linear size bound was shown by Lovett, Meka, Mertz, Pitassi and Zhang, who used a recent breakthrough improvement on the Sunflower Lemma to show that a certain graph associated with the Index function of near-linear size is a disperser. They also stated a conjecture about the Index function that is essential for further improvements in the size required for lifting with Index using current techniques. In this paper we prove the following: 1) The conjecture of Lovett et al. is false when the size of the Index gadget is log N - \omega(1). 2) Also, the Inner-Product function, which satisfies the disperser property at size O(log N), does not have this property ...
- Query-to-communication lifting theorems, which connect the query complexity of a Boolean function to the communication complexity of an associated ‘lifted’ function obtained by composing the function with many copies of another function known as a gadget, have been instrumental in resolving many open questions in computational complexity. A number of important complexity questions could be resolved if we could make substantial improvements in the input size required for lifting with the Index function, which is a universal gadget for lifting, from its current near-linear size down to polylogarithmic in the number of inputs N of the original function or, ideally, constant. The near-linear size bound was recently shown by Lovett, Meka, Mertz, Pitassi and Zhang, who used a breakthrough improvement on the Sunflower Lemma to show that a certain graph associated with an Index function of that size is a disperser. They also stated a conjecture about the Index function that is essential for further improvements in the size required for lifting with Index using current techniques. In this paper we prove the following: • The conjecture of Lovett et al. is false when the size of the Index gadget is less than logarithmic in N. • The same limitation ...
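The sketch promised in the sensitivity/block-sensitivity item above: a brute-force computation of both quantities, assuming the Boolean function is given as a Python callable; the 4-bit example is illustrative and is not the 2k^2-variable construction referred to in that abstract.

```python
from itertools import combinations, product

def flip(x, block):
    """Return x with the coordinates in `block` flipped."""
    return tuple(b ^ 1 if i in block else b for i, b in enumerate(x))

def max_disjoint(blocks):
    """Size of the largest pairwise-disjoint subfamily of `blocks` (brute force)."""
    best = 0
    def search(start, chosen, used):
        nonlocal best
        best = max(best, chosen)
        for j in range(start, len(blocks)):
            if not (blocks[j] & used):
                search(j + 1, chosen + 1, used | blocks[j])
    search(0, 0, frozenset())
    return best

def sensitivity_and_block_sensitivity(f, n):
    """Brute-force s(f) and bs(f); exponential time, intended only for tiny n."""
    s = bs = 0
    for x in product((0, 1), repeat=n):
        fx = f(x)
        # local sensitivity: single coordinates whose flip changes f(x)
        s = max(s, sum(f(flip(x, {i})) != fx for i in range(n)))
        # sensitive blocks: nonempty subsets whose flip changes f(x)
        sensitive = [frozenset(c)
                     for r in range(1, n + 1)
                     for c in combinations(range(n), r)
                     if f(flip(x, set(c))) != fx]
        # local block sensitivity: largest pairwise-disjoint family of sensitive blocks
        bs = max(bs, max_disjoint(sensitive))
    return s, bs

# Toy example: f = (x1 AND x2) OR (x3 AND x4) on 4 bits.
f = lambda x: (x[0] & x[1]) | (x[2] & x[3])
print(sensitivity_and_block_sensitivity(f, 4))   # prints (2, 2)
```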
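And the sketch promised in the lifting item: composing a Boolean function with N independent copies of the Index gadget, the standard way a 'lifted' function is built; the XOR outer function, the tiny gadget size, and the split of pointers versus tables between the two communicating parties are illustrative conventions rather than details taken from that abstract.

```python
def index_gadget(pointer, table):
    """IND_m(x, y) = y[x]: x is a pointer in [m], y is a table in {0,1}^m."""
    return table[pointer]

def lifted(f, pointers, tables):
    """Evaluate the composed function f o IND_m^N: the i-th input bit fed to f
    is the Index gadget applied to the i-th pointer and the i-th table."""
    bits = [index_gadget(x_i, y_i) for x_i, y_i in zip(pointers, tables)]
    return f(bits)

# Toy instance: f = XOR on N = 3 bits, gadget size m = 4.
f = lambda bits: sum(bits) % 2
pointers = [2, 0, 3]                                   # Alice's half of the input
tables = [[0, 1, 1, 0], [1, 0, 0, 1], [0, 0, 1, 1]]    # Bob's half of the input
print(lifted(f, pointers, tables))                     # f(1, 1, 1) = 1
```

Lifting theorems relate the communication complexity of this composed function, with one party holding the pointers and the other the tables, to the query complexity of f; the question raised in the items above is how small the gadget size m can be made while such a theorem still holds.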