Following Chaudhuri, Sankaranarayanan, and Vardi, we say that a function f:[0,1]→[0,1] is r-regular if there is a Büchi automaton that accepts precisely the set of base r∈N representations of elements of the graph of f. We show that a continuous r-regular function f is locally affine away from a nowhere dense, Lebesgue null, subset of [0,1]. As a corollary we establish that every differentiable r-regular function is affine. It follows that checking whether an r-regular function is differentiable is in PSPACE. Our proofs rely crucially on connections between automata theory and metric geometry developed by Charlier, Leroy, and Rigo.
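As an illustration of the definition only (not taken from the paper), the affine map f(x) = x/2 is 2-regular: in base 2, the digit stream of x/2 is the digit stream of x delayed by one position, so a one-state transducer relates the two streams. The Python sketch below checks this digit-level relation on finite prefixes (the Büchi automaton itself runs on infinite words); the function names are ours.

```python
# Illustrative sketch (not from the paper): the graph of f(x) = x/2 is
# 2-regular because, digit by digit, y = x/2 in base 2 means
# y_1 = 0 and y_{i+1} = x_i.  A one-state "shift" automaton checks this.

def accepts_prefix(x_digits, y_digits):
    """Check a finite prefix of the pair (x, y) against the shift relation
    y = x/2 in base 2.  Both arguments are lists of bits of equal length."""
    if len(x_digits) != len(y_digits):
        return False
    pending = 0                      # the digit the automaton still owes y
    for xd, yd in zip(x_digits, y_digits):
        if yd != pending:            # y must reproduce x delayed by one step
            return False
        pending = xd
    return True

# x = 0.1100... = 0.75 and y = 0.0110... = 0.375 = x/2  -> accepted
print(accepts_prefix([1, 1, 0, 0], [0, 1, 1, 0]))   # True
# y = 0.1010... is not x/2                            -> rejected
print(accepts_prefix([1, 1, 0, 0], [1, 0, 1, 0]))   # False
```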
A new network with super-approximation power is introduced. This network is built with Floor (⌊x⌋) or ReLU (max{0,x}) activation function in each neuron; hence, we call such networks Floor-ReLU networks. For any hyperparameters N∈N+ and L∈N+, we show that Floor-ReLU networks with width max{d, 5N+13} and depth 64dL+3 can uniformly approximate a Hölder function f on [0,1]^d with an approximation error 3λ d^(α/2) N^(−αL), where α∈(0,1] and λ are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function f on [0,1]^d with a modulus of continuity ωf(·), the constructive approximation rate is ωf(√d N^(−L)) + 2ωf(√d) N^(−L). As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of ωf(r) as r→0 is moderate (e.g., ωf(r) ≲ r^α for Hölder continuous functions), since the major term to be considered in our approximation rate is essentially √d times a function of N and L, independent of d, within the modulus of continuity.
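As an illustration only (the numbers below are not from the paper), a few lines of Python evaluate the width, depth, and Hölder-case error bound quoted in the abstract above for sample hyperparameters, showing how the bound decays in N and L; the variable names are ours.

```python
def floor_relu_bound(d, N, L, alpha=1.0, lam=1.0):
    """Width, depth, and Hölder-case error bound quoted in the abstract above:
    width max{d, 5N+13}, depth 64dL+3, error 3*lam*d^(alpha/2)*N^(-alpha*L)."""
    width = max(d, 5 * N + 13)
    depth = 64 * d * L + 3
    error = 3 * lam * d ** (alpha / 2) * N ** (-alpha * L)
    return width, depth, error

# Example: a 1-Lipschitz (alpha = 1, lam = 1) target function on [0,1]^10.
for N, L in [(2, 2), (2, 4), (4, 4)]:
    w, dp, err = floor_relu_bound(d=10, N=N, L=L)
    print(f"N={N}, L={L}: width={w}, depth={dp}, error bound={err:.2e}")
```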
Filmus, Y; Lifshitz, N; Minzer, D; Mossel, E.
(STOC 2020: Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing)
A function f:{0,1}^n→{0,1} is called an approximate AND-homomorphism if, choosing x,y∈{0,1}^n uniformly at random, we have that f(x∧y) = f(x)∧f(y) with probability at least 1−ε, where x∧y = (x_1∧y_1,…,x_n∧y_n). We prove that if f:{0,1}^n→{0,1} is an approximate AND-homomorphism, then f is δ-close to either a constant function or an AND function, where δ(ε) → 0 as ε → 0. This improves on a result of Nehama, who proved a similar statement in which δ depends on n. Our theorem implies a strong result on judgement aggregation in computational social choice. In the language of social choice, our result shows that if f is ε-close to satisfying judgement aggregation, then it is δ(ε)-close to an oligarchy (the name for the AND function in social choice theory). This improves on Nehama’s result, in which δ decays polynomially with n. Our result follows from a more general one, in which we characterize approximate solutions to the eigenvalue equation Tf = λg, where T is the downwards noise operator Tf(x) = E_y[f(x ∧ y)], f is [0,1]-valued, and g is {0,1}-valued. We identify all exact solutions to this equation, and show that any approximate solution in which Tf and λg are close is close to an exact solution.
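As an illustration only (not from the paper), the definitions above can be checked by brute force for small n: the snippet below computes Pr[f(x∧y) = f(x)∧f(y)] exactly for two small Boolean functions and evaluates the downwards noise operator Tf(x) = E_y[f(x∧y)]; all names and the example functions are ours.

```python
from itertools import product

def and_homomorphism_agreement(f, n):
    """Exact probability over uniform x, y in {0,1}^n that f(x AND y) = f(x) AND f(y)."""
    hits = total = 0
    for x in product((0, 1), repeat=n):
        for y in product((0, 1), repeat=n):
            z = tuple(a & b for a, b in zip(x, y))
            hits += int(f(z) == (f(x) & f(y)))
            total += 1
    return hits / total

def downwards_noise(f, x):
    """Tf(x) = E_y[f(x AND y)] with y uniform in {0,1}^n."""
    vals = [f(tuple(a & b for a, b in zip(x, y)))
            for y in product((0, 1), repeat=len(x))]
    return sum(vals) / len(vals)

n = 4
and_first_two = lambda x: x[0] & x[1]                 # an AND function: exact homomorphism
majority      = lambda x: int(sum(x) > len(x) / 2)    # not close to an AND function

print(and_homomorphism_agreement(and_first_two, n))   # 1.0
print(and_homomorphism_agreement(majority, n))        # strictly below 1.0
print(downwards_noise(and_first_two, (1, 1, 0, 0)))   # 0.25 = Pr[y_1 = y_2 = 1]
```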
Alon, Noga; Zheng, Kai
(, Advances in Combinatorics)
Boolean functions play an important role in many different areas of computer science. The _local sensitivity_ of a Boolean function $$f:\{0,1\}^n\to \{0,1\}$$ on an input $$x\in\{0,1\}^n$$ is the number of coordinates whose flip changes the value of $$f(x)$$, i.e., the number of $$i$$'s such that $$f(x)\not=f(x+e_i)$$, where $$e_i$$ is the $$i$$-th unit vector. The _sensitivity_ of a Boolean function is its maximum local sensitivity. In other words, the sensitivity measures the robustness of a Boolean function with respect to a perturbation of its input. Another notion that measures the robustness is block sensitivity. The _local block sensitivity_ of a Boolean function $$f:\{0,1\}^n\to \{0,1\}$$ on an input $$x\in\{0,1\}^n$$ is the maximum number of pairwise disjoint subsets $$I$$ of $$\{1,\dots,n\}$$ such that flipping the coordinates indexed by $$I$$ changes the value of $$f(x)$$, and the _block sensitivity_ of $$f$$ is its maximum local block sensitivity. Since the local block sensitivity is at least the local sensitivity for any input $$x$$, the block sensitivity of $$f$$ is at least the sensitivity of $$f$$.

The next example demonstrates that the block sensitivity of a Boolean function is not linearly bounded by its sensitivity. Fix an integer $$k\ge 2$$ and define a Boolean function $$f:\{0,1\}^{2k^2}\to\{0,1\}$$ as follows: the coordinates of $$x\in\{0,1\}^{2k^2}$$ are split into $$k$$ blocks of size $$2k$$ each, and $$f(x)=1$$ if and only if at least one of the blocks contains exactly two entries equal to one and these entries are consecutive. While the sensitivity of the function $$f$$ is $$2k$$, its block sensitivity is $$k^2$$. The Sensitivity Conjecture, made by Nisan and Szegedy in 1992, asserts that the block sensitivity of a Boolean function is polynomially bounded by its sensitivity. The example above shows that the degree of such a polynomial must be at least two.

The Sensitivity Conjecture has been recently proven by Huang in [Annals of Mathematics 190 (2019), 949-955](https://doi.org/10.4007/annals.2019.190.3.6). He proved the following combinatorial statement that implies the conjecture (with the degree of the polynomial equal to four): any subset of more than half of the vertices of the $$n$$-dimensional cube $$\{0,1\}^n$$ induces a subgraph that contains a vertex with degree at least $$\sqrt{n}$$. The present article extends this result as follows: every Cayley graph with the vertex set $$\{0,1\}^n$$ and any generating set of size $$d$$ (the vertex set is viewed as a vector space over the binary field) satisfies that any subset of more than half of its vertices induces a subgraph that contains a vertex of degree at least $$\sqrt{d}$$. In particular, when the generating set consists of the $$n$$ unit vectors, the Cayley graph is the $$n$$-dimensional hypercube.
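As an illustration only (not part of the article), the separating example above can be checked by brute force for $$k=2$$ (so $$n=2k^2=8$$): the Python snippet computes the sensitivity exactly and exhibits the $$k^2$$ disjoint blocks witnessing the block sensitivity at the all-zeros input; the code and names are ours.

```python
from itertools import product

K = 2
N = 2 * K * K           # n = 2k^2 coordinates, split into k blocks of size 2k

def f(x):
    """f(x) = 1 iff some block of length 2k has exactly two 1s and they are consecutive."""
    for b in range(K):
        block = x[2 * K * b: 2 * K * (b + 1)]
        ones = [i for i, bit in enumerate(block) if bit]
        if len(ones) == 2 and ones[1] == ones[0] + 1:
            return 1
    return 0

def local_sensitivity(f, x):
    flip = lambda x, i: tuple(b ^ (j == i) for j, b in enumerate(x))
    return sum(f(x) != f(flip(x, i)) for i in range(len(x)))

# Sensitivity = max local sensitivity over all 2^n inputs (2^8 = 256 for k = 2).
sensitivity = max(local_sensitivity(f, x) for x in product((0, 1), repeat=N))
print(sensitivity)      # 2k = 4

# Block sensitivity at the all-zeros input: the k^2 disjoint pairs of consecutive
# coordinates (k pairs in each of the k blocks); flipping any one pair turns f from 0 to 1.
pairs = [(2 * K * b + 2 * j, 2 * K * b + 2 * j + 1) for b in range(K) for j in range(K)]
witnesses = [f(tuple(1 if i in p else 0 for i in range(N))) for p in pairs]
print(len(pairs), all(witnesses))   # 4 True  -> block sensitivity >= k^2 = 4
```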
Gnewuch, Michael; Kritzer, Peter; Owen, Art_B; Pan, Zexin
(Information and Inference: A Journal of the IMA)
Let $$f:[0,1]^{d}\to{\mathbb{R}}$$ be a completely monotone integrand as defined by Aistleitner and Dick (2015, Acta Arithmetica, 167, 143–171) and let points $$\boldsymbol{x}_{0},\dots ,\boldsymbol{x}_{n-1}\in [0,1]^{d}$$ have a non-negative local discrepancy (NNLD) everywhere in $$[0,1]^{d}$$. We show how to use these properties to get a non-asymptotic and computable upper bound for the integral of $$f$$ over $$[0,1]^{d}$$. An analogous non-positive local discrepancy property provides a computable lower bound. It has been known since Gabai (1967, Illinois J. Math., 11, 1–12) that the two-dimensional Hammersley points in any base $$b\geqslant 2$$ have NNLD. Using the probabilistic notion of associated random variables, we generalize Gabai’s finding to digital nets in any base $$b\geqslant 2$$ and any dimension $$d\geqslant 1$$ when the generator matrices are permutation matrices. We show that permutation matrices cannot attain the best values of the digital net quality parameter when $$d\geqslant 3$$. As a consequence, the computable absolutely sure bounds we provide come with less accurate estimates than the usual digital net estimates do in high dimensions. We are also able to construct high-dimensional rank one lattice rules that are NNLD. We show that those lattices do not have good discrepancy properties: any lattice rule with the NNLD property in dimension $$d\geqslant 2$$ either fails to be projection regular or has all its points on the main diagonal. Complete monotonicity is a very strict requirement that for some integrands can be mitigated via a control variate.
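As an illustration only (not from the paper), the NNLD property can be probed numerically for the two-dimensional Hammersley points in base 2: the snippet below builds the point set and evaluates the local discrepancy on a grid of anchor boxes. The helper names are ours, and the grid evaluation is only a spot check, not a proof.

```python
import numpy as np

def van_der_corput(i, base=2):
    """Radical inverse of the integer i in the given base."""
    x, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        i, digit = divmod(i, base)
        x += digit / denom
    return x

def hammersley_2d(n):
    """Two-dimensional Hammersley points (i/n, radical inverse of i), i = 0..n-1."""
    return np.array([[i / n, van_der_corput(i)] for i in range(n)])

def local_discrepancy(points, anchor):
    """Fraction of points in [0, a1) x [0, a2) minus the volume a1 * a2."""
    inside = np.all(points < anchor, axis=1).mean()
    return inside - anchor[0] * anchor[1]

pts = hammersley_2d(64)
grid = np.linspace(0.0, 1.0, 33)          # spot-check anchors on a 33 x 33 grid
worst = min(local_discrepancy(pts, np.array([a, b])) for a in grid for b in grid)
print(worst >= 0.0)   # expected True: no negative local discrepancy found on this grid
```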
Pyne, Edward; Raz, Ran; Zhan, Wei
(64th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2023)
Let L be a language that can be decided in linear space and let ϵ>0 be any constant. Let A be the exponential hardness assumption that for every n, membership in L for inputs of length n cannot be decided by circuits of size smaller than 2^(ϵn). We prove that for every function f:{0,1}^∗→{0,1}, computable by a randomized logspace algorithm R, there exists a deterministic logspace algorithm D (attempting to compute f), such that on every input x of length n, the algorithm D outputs one of the following:
1) The correct value f(x).
2) The string: “I am unable to compute f(x) because the hardness assumption A is false”, followed by a (provenly correct) circuit of size smaller than 2^(ϵn′) for membership in L for inputs of length n′, for some n′=Θ(log n); that is, a circuit that refutes A.
Moreover, D is explicitly constructed, given R. We note that previous works on the hardness-versus-randomness paradigm give derandomized algorithms that rely blindly on the hardness assumption. If the hardness assumption is false, the algorithms may output incorrect values, and thus a user cannot trust that an output given by the algorithm is correct. Instead, our algorithm D verifies the computation so that it never outputs an incorrect value. Thus, if D outputs a value for f(x), that value is certified to be correct. Moreover, if D does not output a value for f(x), it alerts that the hardness assumption was found to be false, and refutes the assumption. Our next result is a universal derandomizer for BPL (the class of problems solvable by bounded-error randomized logspace algorithms): we give a deterministic algorithm U that takes as an input a randomized logspace algorithm R and an input x and simulates the computation of R on x, deterministically. Under the widely believed assumption BPL=L, the space ...
@article{osti_10177072,
title = {Continuous Regular Functions},
url = {https://par.nsf.gov/biblio/10177072},
DOI = {10.23638/LMCS-16(1:17)2020},
abstractNote = {Following Chaudhuri, Sankaranarayanan, and Vardi, we say that a function f:[0,1]→[0,1] is r-regular if there is a Büchi automaton that accepts precisely the set of base r∈N representations of elements of the graph of f. We show that a continuous r-regular function f is locally affine away from a nowhere dense, Lebesgue null, subset of [0,1]. As a corollary we establish that every differentiable r-regular function is affine. It follows that checking whether an r-regular function is differentiable is in PSPACE. Our proofs rely crucially on connections between automata theory and metric geometry developed by Charlier, Leroy, and Rigo.},
journal = {Logical Methods in Computer Science},
volume = {16},
number = {1},
author = {Block Gorman, Alexi and Hieronymi, Philipp and Kaplan, Elliot and Meng, Ruoyu and Walsberg, Erik and Wang, Zihe and Xiong, Ziqin and Yang, Hongru},
}