Let L be a language that can be decided in linear space and let ϵ > 0 be any constant. Let A be the exponential hardness assumption that for every n, membership in L for inputs of length n cannot be decided by circuits of size smaller than 2^(ϵn). We prove that for every function f : {0,1}* → {0,1}, computable by a randomized logspace algorithm R, there exists a deterministic logspace algorithm D (attempting to compute f), such that on every input x of length n, the algorithm D outputs one of the following:
1) The correct value f(x).
2) The string "I am unable to compute f(x) because the hardness assumption A is false", followed by a (provenly correct) circuit of size smaller than 2^(ϵn′) for membership in L for inputs of length n′, for some n′ = Θ(log n); that is, a circuit that refutes A.
Moreover, D is explicitly constructed, given R.
We note that previous works on the hardness-versus-randomness paradigm give derandomized algorithms that rely blindly on the hardness assumption. If the hardness assumption is false, the algorithms may output incorrect values, and thus a user cannot trust that an output given by the algorithm is correct. Instead, our algorithm D verifies the computation, so it never outputs an incorrect value. Thus, if D outputs a value for f(x), that value is certified to be correct. Moreover, if D does not output a value for f(x), it alerts the user that the hardness assumption was found to be false, and refutes the assumption.
Our next result is a universal derandomizer for BPL (the class of problems solvable by bounded-error randomized logspace algorithms): we give a deterministic algorithm U that takes as input a randomized logspace algorithm R and an input x, and simulates the computation of R on x deterministically. Under the widely believed assumption BPL = L, the space ...
                            Deep Network With Approximation Error Being Reciprocal of Width to Power of Square Root of Depth
                        
                    
    
A new network with super-approximation power is introduced. This network is built with Floor (⌊x⌋) or ReLU (max{0,x}) activation function in each neuron; hence, we call such networks Floor-ReLU networks. For any hyperparameters N ∈ ℕ+ and L ∈ ℕ+, we show that Floor-ReLU networks with width max{d, 5N+13} and depth 64dL+3 can uniformly approximate a Hölder function f on [0,1]^d with an approximation error 3λ d^(α/2) N^(−α√L), where α ∈ (0,1] and λ are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function f on [0,1]^d with a modulus of continuity ω_f(·), the constructive approximation rate is ω_f(√d · N^(−√L)) + 2 ω_f(√d) N^(−√L). As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of ω_f(r) as r → 0 is moderate (e.g., ω_f(r) ≲ r^α for Hölder continuous functions), since the major term to be considered in our approximation rate is essentially √d times a function of N and L independent of d within the modulus of continuity.
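To make the stated rate concrete, the short sketch below (not from the paper; the parameter choices are illustrative) evaluates the Hölder-case bound 3λ d^(α/2) N^(−α√L) together with the corresponding width max{d, 5N+13} and depth 64dL+3, showing how the error bound decays in the depth hyperparameter L at fixed width.

```python
# A quick numerical illustration (not from the paper) of the stated Hölder-case
# bound 3*lambda*d^(alpha/2)*N^(-alpha*sqrt(L)), together with the corresponding
# Floor-ReLU width max{d, 5N+13} and depth 64dL+3. Parameter choices are illustrative.
import math

d, alpha, lam, N = 10, 1.0, 1.0, 4
for L in (1, 4, 9, 16, 25):
    width = max(d, 5 * N + 13)                      # network width from the theorem
    depth = 64 * d * L + 3                          # network depth from the theorem
    err = 3 * lam * d ** (alpha / 2) * N ** (-alpha * math.sqrt(L))
    print(f"L={L:2d}  width={width}  depth={depth}  error bound={err:.2e}")
```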
        
    
- Award ID(s): 1945029
- PAR ID: 10230771
- Date Published:
- Journal Name: Neural Computation
- Volume: 33
- Issue: 4
- ISSN: 0899-7667
- Page Range / eLocation ID: 1005 to 1036
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Meka, Raghu (Ed.) In the d-dimensional turnstile streaming model, a frequency vector 𝐱 = (𝐱(1),…,𝐱(n)) ∈ (ℝ^d)ⁿ is updated entry-wise over a stream. We consider the problem of f-moment estimation, for which one wants to estimate f(𝐱)=∑_{v ∈ [n]} f(𝐱(v)) with a small-space sketch. A function f is tractable if the f-moment can be estimated to within a constant factor using polylog(n) space. The f-moment estimation problem has been intensively studied in the d = 1 case. Flajolet and Martin estimate the F₀-moment (f(x) = 1 (x > 0), incremental stream); Alon, Matias, and Szegedy estimate the L₂-moment (f(x) = x²); Indyk estimates the L_α-moment (f(x) = |x|^α), α ∈ (0,2]. For d ≥ 2, Ganguly, Bansal, and Dube estimate the L_{p,q} hybrid moment (f:ℝ^d → ℝ, f(x) = (∑_{j = 1}^d |x_j|^p)^q), p ∈ (0,2], q ∈ (0,1). For tractability, Bar-Yossef, Jayram, Kumar, and Sivakumar show that f(x) = |x|^α is not tractable for α > 2. Braverman, Chestnut, Woodruff, and Yang characterize the class of tractable one-variable functions except for a class of nearly periodic functions. In this work we present a simple and generic scheme to construct sketches with the novel idea of hashing indices to Lévy processes, from which one can estimate the f-moment f(𝐱) where f is the characteristic exponent of the Lévy process. The fundamental Lévy-Khintchine representation theorem completely characterizes the space of all possible characteristic exponents, which in turn characterizes the set of f-moments that can be estimated by this generic scheme. The new scheme has strong explanatory power. It unifies the construction of many existing sketches (F₀, L₀, L₂, L_α, L_{p,q}, etc.) and it implies the tractability of many nearly periodic functions that were previously unclassified. Furthermore, the scheme can be conveniently generalized to multidimensional cases (d ≥ 2) by considering multidimensional Lévy processes, and it can be further generalized to estimate heterogeneous moments by projecting different indices with different Lévy processes. We conjecture that the set of tractable functions can be characterized using the Lévy-Khintchine representation theorem via what we call the Fourier-Hahn-Lévy method. (A minimal sketch of the classical α-stable special case of this idea appears after this list.)
- 
Let $$u_{k}$$ be a solution of the Helmholtz equation with the wave number k, $$\varDelta u_{k}+k^{2} u_{k}=0$$, on (a small ball in) either $${\mathbb {R}}^{n}$$, $${\mathbb {S}}^{n}$$, or $${\mathbb {H}}^{n}$$. For a fixed point p, we define $$M_{u_{k}}(r)=\max _{d(x,p)\le r}|u_{k}(x)|.$$ The following three ball inequality $$M_{u_{k}}(2r)\le C(k,r,\alpha )M_{u_{k}}(r)^{\alpha }M_{u_{k}}(4r)^{1-\alpha }$$ is well known; it holds for some $$\alpha \in (0,1)$$ and $$C(k,r,\alpha )>0$$ independent of $$u_{k}$$. We show that the constant $$C(k,r,\alpha )$$ grows exponentially in k (when r is fixed and small). We also compare our result with the increased stability for solutions of the Cauchy problem for the Helmholtz equation on Riemannian manifolds.
- 
Two-sample tests aim to determine whether two collections of observations follow the same distribution. We propose two-sample tests based on integral probability metrics (IPMs) for high-dimensional samples supported on a low-dimensional manifold. We characterize the properties of the proposed tests with respect to the number of samples $$n$$ and the structure of the manifold with intrinsic dimension $$d$$. When an atlas is given, we propose a two-step test to identify the difference between general distributions, which achieves the type-II risk in the order of $$n^{-1/\max \{d,2\}}$$. When an atlas is not given, we propose the Hölder IPM test, which applies to data distributions with $$(s,\beta )$$-Hölder densities and achieves the type-II risk in the order of $$n^{-(s+\beta )/d}$$. To mitigate the heavy computation burden of evaluating the Hölder IPM, we approximate the Hölder function class using neural networks. Based on the approximation theory of neural networks, we show that the neural network IPM test has type-II risk in the order of $$n^{-(s+\beta )/d}$$, the same order as the Hölder IPM test. Our proposed tests are adaptive to low-dimensional geometric structure because their performance crucially depends on the intrinsic dimension instead of the data dimension. (A minimal permutation-test sketch in the spirit of these IPM tests appears after this list.)
- 
Oh, A; Naumann, T; Globerson, A; Saenko, K; Hardt, M; Levine, S (Ed.) We consider the problem of learning a single-index target function f∗ : ℝ^d → ℝ under the spiked covariance data: f∗(x) = σ∗(⟨x,μ⟩/√(1+θ)), x ∼ N(0, I_d + θμμ⊤), θ ≍ d^β for β ∈ [0, 1), where the link function σ∗ : ℝ → ℝ is a degree-p polynomial with information exponent k (defined as the lowest degree in the Hermite expansion of σ∗), and it depends on the projection of input x onto the spike (signal) direction μ ∈ ℝ^d. In the proportional asymptotic limit where the number of training examples n and the dimensionality d jointly diverge, n, d → ∞, n/d → ψ ∈ (0,∞), we ask the following question: how large should the spike magnitude θ be in order for (i) kernel methods and (ii) neural networks optimized by gradient descent to learn f∗? We show that for kernel ridge regression, β ≥ 1 − 1/p is both sufficient and necessary, whereas for two-layer neural networks trained with gradient descent, β > 1 − 1/k suffices. Our results demonstrate that both kernel methods and neural networks benefit from low-dimensional structures in the data. Further, since k ≤ p by definition, neural networks can adapt to such structures more effectively. (A minimal sampler for this spiked single-index model appears after this list.)
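As a concrete illustration of the "hash indices to Lévy processes" idea in the streaming item above, the sketch below implements the classical α-stable special case (essentially Indyk's L_α sketch, whose characteristic exponent is proportional to |x|^α). It is a simplified, illustrative rendition rather than the paper's general construction: the projection matrix is stored explicitly instead of being generated pseudorandomly in small space, and the median normalization constant is estimated empirically.

```python
# Illustrative sketch of the classical alpha-stable special case of "hashing
# indices to Lévy processes" (Indyk-style L_alpha estimation, characteristic
# exponent proportional to |x|^alpha); not the paper's general construction.
import numpy as np
from scipy.stats import levy_stable

def make_projection(n, m, alpha, seed=0):
    # A[j, v] plays the role of the Lévy process assigned to index v. A real
    # small-space sketch would generate these entries pseudorandomly on the fly
    # instead of storing an m x n matrix.
    return levy_stable.rvs(alpha, 0, size=(m, n), random_state=seed)

def update(y, A, v, delta):
    """Turnstile update x(v) += delta, applied to the linear sketch y = A x."""
    y += delta * A[:, v]

def estimate_moment(y, alpha, seed=1):
    # Each y_j is ||x||_alpha times an independent standard symmetric alpha-stable
    # variable, so the median of |y| recovers ||x||_alpha up to a constant; the
    # constant is estimated empirically here (an illustrative shortcut).
    ref = levy_stable.rvs(alpha, 0, size=20000, random_state=seed)
    scale = np.median(np.abs(y)) / np.median(np.abs(ref))
    return scale ** alpha        # the f-moment with f(x) = |x|^alpha is ||x||_alpha^alpha

if __name__ == "__main__":
    n, m, alpha = 2000, 200, 1.0
    rng = np.random.default_rng(2)
    A = make_projection(n, m, alpha)
    y, x = np.zeros(m), np.zeros(n)
    for _ in range(500):         # a small synthetic turnstile stream
        v, delta = int(rng.integers(n)), float(rng.normal())
        x[v] += delta
        update(y, A, v, delta)
    print("sketch estimate:", estimate_moment(y, alpha))
    print("exact moment   :", np.sum(np.abs(x) ** alpha))
```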
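In the spirit of the IPM-based two-sample tests described above, here is a minimal sketch of a permutation test whose statistic is an IPM over a finite random-feature class, used as a computationally light stand-in for the Hölder / neural-network witness classes in that abstract; all names and parameter choices are illustrative assumptions, not the authors' procedure.

```python
# A minimal permutation two-sample test with an IPM-style statistic over a
# finite random-feature class (a stand-in for the Hölder / neural-network IPMs
# discussed above); parameter choices are illustrative.
import numpy as np

def ipm_statistic(X, Y, W, b):
    """sup over the random-feature class of |mean_P f - mean_Q f| (finite-class IPM)."""
    fX = np.cos(X @ W + b)            # feature evaluations on sample X, shape (n, m)
    fY = np.cos(Y @ W + b)            # feature evaluations on sample Y, shape (n', m)
    return np.max(np.abs(fX.mean(axis=0) - fY.mean(axis=0)))

def permutation_test(X, Y, n_perm=500, n_features=256, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(size=(d, n_features))           # random frequencies
    b = rng.uniform(0, 2 * np.pi, size=n_features) # random phases
    observed = ipm_statistic(X, Y, W, b)
    Z = np.vstack([X, Y])
    count = 0
    for _ in range(n_perm):                        # permutation null distribution
        perm = rng.permutation(len(Z))
        Xp, Yp = Z[perm[:len(X)]], Z[perm[len(X):]]
        count += ipm_statistic(Xp, Yp, W, b) >= observed
    return (count + 1) / (n_perm + 1)              # permutation p-value

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))
    Y = rng.normal(loc=0.5, size=(200, 5))         # mean-shifted alternative
    print("p-value:", permutation_test(X, Y))
```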
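Finally, a minimal sketch (illustrative, not the authors' experimental setup) of sampling from the spiked single-index model stated in the last item: x ∼ N(0, I_d + θμμ⊤) with θ = d^β and labels f∗(x) = σ∗(⟨x,μ⟩/√(1+θ)). The cubic Hermite link used in the example is just one choice of a degree-3 polynomial with information exponent k = 3.

```python
# A minimal sampler for the spiked single-index model stated above; the link
# function and parameters are illustrative choices, not the authors' setup.
import numpy as np

def sample_spiked_single_index(n, d, beta, sigma_star, seed=0):
    rng = np.random.default_rng(seed)
    theta = d ** beta                              # spike magnitude theta ≍ d^beta
    mu = rng.normal(size=d)
    mu /= np.linalg.norm(mu)                       # unit spike (signal) direction
    # x ~ N(0, I_d + theta * mu mu^T): isotropic noise plus a rank-one spike
    z = rng.normal(size=(n, d))
    g = rng.normal(size=(n, 1))
    X = z + np.sqrt(theta) * g * mu                # covariance I_d + theta * mu mu^T
    y = sigma_star(X @ mu / np.sqrt(1 + theta))    # f*(x) = sigma*(<x, mu> / sqrt(1 + theta))
    return X, y, mu

if __name__ == "__main__":
    he3 = lambda t: t ** 3 - 3 * t                 # 3rd Hermite polynomial: information exponent k = 3
    X, y, mu = sample_spiked_single_index(n=1000, d=100, beta=0.5, sigma_star=he3)
    print(X.shape, y.shape)
```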