Title: Improved Bounds on Neural Complexity for Representing Piecewise Linear Functions
A deep neural network using rectified linear units represents a continuous piecewise linear (CPWL) function and vice versa. Recent results in the literature estimated that the number of neurons needed to exactly represent any CPWL function grows exponentially with the number of pieces, or with the factorial of the number of distinct linear components. Moreover, such growth is amplified linearly with the input dimension. These existing results seem to indicate that the cost of representing a CPWL function is expensive. In this paper, we propose much tighter bounds and establish a polynomial-time algorithm to find a network satisfying these bounds for any given CPWL function. We prove that the number of hidden neurons required to exactly represent any CPWL function is at most a quadratic function of the number of pieces. In contrast to all previous results, this upper bound is invariant to the input dimension. Besides the number of pieces, we also study the number of distinct linear components in CPWL functions. When this number is also given, we prove that the quadratic complexity turns into bilinear, which implies a lower neural complexity because the number of distinct linear components is never greater than the minimum number of pieces of a CPWL function. When the number of pieces is unknown, we prove that, in terms of the number of distinct linear components, the neural complexity of any CPWL function grows at most polynomially for low-dimensional inputs and at most factorially in the worst case, which is significantly better than existing results in the literature.
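To make the CPWL-ReLU correspondence in the abstract concrete, here is a minimal numerical sketch (not the paper's construction or its bound-achieving algorithm): a two-piece CPWL function given as a maximum of affine pieces is reproduced exactly by a ReLU network with a single hidden neuron, using the identity max(u, v) = v + relu(u - v); the dimension and coefficients are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A CPWL function with two pieces: f(x) = max(a1.x + b1, a2.x + b2) on R^d.
d = 3
a1, b1 = rng.standard_normal(d), 0.5
a2, b2 = rng.standard_normal(d), -0.2

def f(x):
    return np.maximum(x @ a1 + b1, x @ a2 + b2)

# Exact representation via max(u, v) = v + relu(u - v):
# one hidden ReLU neuron plus an affine term in the output layer.
relu = lambda z: np.maximum(z, 0.0)

def relu_net(x):
    hidden = relu(x @ (a1 - a2) + (b1 - b2))   # single hidden ReLU neuron
    return (x @ a2 + b2) + hidden              # affine output layer

x = rng.standard_normal((1000, d))
print("max |f - net| over 1000 random points:", np.max(np.abs(f(x) - relu_net(x))))
```

The printed discrepancy is at floating-point round-off level, i.e. the representation is exact rather than approximate; functions with more pieces need more hidden neurons, which is exactly the quantity the paper bounds.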
Award ID(s):
2124929 2225617
PAR ID:
10417317
Author(s) / Creator(s):
; ;
Editor(s):
S. Koyejo; S. Mohamed; A. Agarwal; D. Belgrave; K. Cho; A. Oh
Date Published:
Journal Name:
Advances in neural information processing systems
Volume:
35
ISSN:
1049-5258
Page Range / eLocation ID:
7167--7180
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Recent work shows that the expressive power of Graph Neural Networks (GNNs) in distinguishing non-isomorphic graphs is exactly the same as that of the Weisfeiler-Lehman (WL) graph test. In particular, they show that the WL test can be simulated by GNNs. However, those simulations involve neural networks for the "combine" function of size polynomial, or even exponential, in the number of graph nodes n, as well as feature vectors of length linear in n. We present an improved simulation of the WL test on GNNs with exponentially lower complexity. In particular, the neural network implementing the combine function in each node has only polylog(n) parameters, and the feature vectors exchanged by the nodes of the GNN consist of only O(log n) bits. We also give logarithmic lower bounds for the feature vector length and the size of the neural networks, showing the (near-)optimality of our construction.
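As background for this abstract, the sketch below runs the 1-WL color refinement that the cited work simulates with GNNs; it is an illustrative plain-Python implementation (signatures relabelled by sorting rather than hashing), not the construction from the paper, and the two example graphs are arbitrary.

```python
from collections import Counter

def wl_colors(adj, rounds):
    """1-dimensional Weisfeiler-Lehman color refinement.
    adj: dict mapping each node to a list of its neighbors."""
    color = {v: 0 for v in adj}                       # uniform initial coloring
    for _ in range(rounds):
        # New color = (own color, multiset of neighbor colors), relabelled to ints.
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v]))) for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        color = {v: palette[sig[v]] for v in adj}
    return Counter(color.values())                    # histogram of final colors

# Two non-isomorphic graphs distinguished by their color histograms.
path4 = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}        # path on 4 nodes
star4 = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}        # star on 4 nodes
print(wl_colors(path4, 3))
print(wl_colors(star4, 3))
```

Differing histograms certify non-isomorphism; the cited result is about how cheaply a GNN's combine function and messages can simulate exactly this refinement.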
  2. We consider the problem of computing the partition function $$\sum_x e^{f(x)}$$, where $$f: \{-1, 1\}^n \longrightarrow {\mathbb R}$$ is a quadratic or cubic polynomial on the Boolean cube $$\{-1, 1\}^n$$. In the case of a quadratic polynomial f, we show that the partition function can be approximated within relative error $$0 < \epsilon < 1$$ in quasi-polynomial $$n^{O(\ln n - \ln \epsilon)}$$ time if the Lipschitz constant of the non-linear part of f with respect to the $$\ell^1$$ metric on the Boolean cube does not exceed $$1-\delta$$, for any $$\delta > 0$$ fixed in advance. For a cubic polynomial f, we get the same result under a somewhat stronger condition. We apply the method of polynomial interpolation, for which we prove that $$\sum_x e^{\tilde{f}(x)} \ne 0$$ for complex-valued polynomials $$\tilde{f}$$ in a neighborhood of a real-valued f satisfying the above-mentioned conditions. The bounds are asymptotically optimal. Results on the zero-free region are interpreted as the absence of a phase transition in the Lee–Yang sense in the corresponding Ising model. The novel feature of the bounds is that they control the total interaction of each vertex but not every single interaction of sets of vertices.
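For concreteness, a brute-force evaluation of the partition function for a small quadratic f on the Boolean cube is sketched below; it is only a baseline for tiny n, not the polynomial-interpolation method of the abstract, and the coefficient ranges are arbitrary (chosen small so the non-linear part stays well inside the Lipschitz regime mentioned above).

```python
import itertools
import math
import random

n = 10
rng = random.Random(0)

# A quadratic f(x) = sum_i a_i x_i + sum_{i<j} b_ij x_i x_j on {-1, 1}^n.
# Small pairwise coefficients keep the non-linear part comfortably within
# the Lipschitz condition discussed above (an assumption of this toy demo).
a = [rng.uniform(-0.5, 0.5) for _ in range(n)]
b = {(i, j): rng.uniform(-0.1, 0.1) for i in range(n) for j in range(i + 1, n)}

def f(x):
    linear = sum(a[i] * x[i] for i in range(n))
    quadratic = sum(c * x[i] * x[j] for (i, j), c in b.items())
    return linear + quadratic

# Brute force over all 2^n points of the cube (feasible only for small n).
Z = sum(math.exp(f(x)) for x in itertools.product((-1, 1), repeat=n))
print(f"partition function Z over {2 ** n} points: {Z:.4f}")
```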
  3. The power of deep neural networks (DNNs) has been successfully demonstrated on a wide variety of high-dimensional problems that cannot be solved by conventional control design methods. These successes also uncover some fundamental and pressing challenges in understanding the representability of deep neural networks for complex and high-dimensional input–output relations. Towards the goal of understanding these fundamental questions, we applied an algebraic framework developed in our previous work to analyze ReLU neural network approximation of compositional functions. We prove that, for Lipschitz continuous functions, ReLU neural networks have an approximation error upper bound that is a polynomial in the network's complexity and the compositional features. If the compositional features do not increase exponentially with dimension, which is the case in many applications, the complexity of the DNN grows polynomially. In addition to function approximation, we also establish ReLU network approximation results for the trajectories of control systems and for a Lyapunov function that characterizes the domain of attraction.
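As a toy illustration of ReLU approximation of a Lipschitz function (not the compositional-function framework or the specific bounds of this abstract), the sketch below writes the piecewise linear interpolant of sin on [0, 1] exactly as an affine term plus one ReLU unit per knot and reports how the sup-norm error shrinks as neurons are added; the target function and knot counts are arbitrary.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0.0)
target = np.sin                                 # a Lipschitz function on [0, 1]

def pwl_relu_interpolant(g, knots):
    """Piecewise linear interpolant of g at `knots`,
    expressed as an affine term plus one ReLU unit per knot."""
    y = g(knots)
    slopes = np.diff(y) / np.diff(knots)
    coeffs = np.diff(slopes, prepend=0.0)       # c_i = slope_i - slope_{i-1}
    def net(t):
        t = np.asarray(t)[..., None]
        return y[0] + relu(t - knots[:-1]) @ coeffs
    return net

grid = np.linspace(0.0, 1.0, 10_001)
for m in (4, 8, 16, 32):                        # m hidden ReLU neurons
    net = pwl_relu_interpolant(target, np.linspace(0.0, 1.0, m + 1))
    err = np.max(np.abs(net(grid) - target(grid)))
    print(f"{m:3d} neurons: sup-norm error {err:.2e}")
```

The error falls rapidly with the neuron count, the qualitative behavior behind error bounds that are polynomial in the network's complexity.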
  4. We prove that a polynomial fraction of the set of $$k$$-component forests in the $$m \times n$$ grid graph have equal numbers of vertices in each component, for any constant $$k$$. This resolves a conjecture of Charikar, Liu, Liu, and Vuong, and establishes the first provably polynomial-time algorithm for (exactly or approximately) sampling balanced grid graph partitions according to the spanning tree distribution, which weights each $$k$$-partition according to the product, across its $$k$$ pieces, of the number of spanning trees of each piece. Our result follows from a careful analysis of the probability a uniformly random spanning tree of the grid can be cut into balanced pieces. Beyond grids, we show that for a broad family of lattice-like graphs, we achieve balance up to any multiplicative $$(1 \pm \varepsilon)$$ constant with constant probability. More generally, we show that, with constant probability, components derived from uniform spanning trees can approximate any given partition of a planar region specified by Jordan curves. This implies polynomial-time algorithms for sampling approximately balanced tree-weighted partitions for lattice-like graphs. Our results have applications to understanding political districtings, where there is an underlying graph of indivisible geographic units that must be partitioned into $$k$$ population-balanced connected subgraphs. In this setting, tree-weighted partitions have interesting geometric properties, and this has stimulated significant effort to develop methods to sample them. 
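The sketch below is a small empirical illustration of the quantity studied above for k = 2 (not the paper's analysis): it samples uniform spanning trees of an m × n grid with Wilson's algorithm and estimates how often removing a single tree edge splits the grid into two equal-size components; the grid size and number of trials are arbitrary.

```python
import random

def grid_neighbors(m, n):
    """4-neighbor adjacency of the m x n grid graph."""
    nbrs = {}
    for i in range(m):
        for j in range(n):
            nbrs[(i, j)] = [(i + di, j + dj)
                            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= i + di < m and 0 <= j + dj < n]
    return nbrs

def wilson_ust(nbrs, rng):
    """Uniform spanning tree via Wilson's loop-erased random walk algorithm."""
    nodes = list(nbrs)
    in_tree, nxt = {nodes[0]}, {}
    for start in nodes:
        u = start
        while u not in in_tree:            # random walk, remembering the last exit
            nxt[u] = rng.choice(nbrs[u])
            u = nxt[u]
        u = start
        while u not in in_tree:            # add the loop-erased path to the tree
            in_tree.add(u)
            u = nxt[u]
    return [(u, nxt[u]) for u in nodes[1:]]    # parent edge of every non-root node

def has_balanced_cut(edges, total):
    """Does removing one tree edge leave two components of equal size?
    (Exact balance requires an even number of vertices.)"""
    children = {}
    for u, v in edges:
        children.setdefault(v, []).append(u)
    sizes = {}
    def size(u):                           # subtree size below u, memoized
        if u not in sizes:
            sizes[u] = 1 + sum(size(c) for c in children.get(u, []))
        return sizes[u]
    return any(size(u) == total // 2 for u, _ in edges)

rng = random.Random(0)
m, n, trials = 6, 6, 200
nbrs = grid_neighbors(m, n)
hits = sum(has_balanced_cut(wilson_ust(nbrs, rng), m * n) for _ in range(trials))
print(f"balanced 2-cut found in {hits}/{trials} sampled spanning trees")
```

The fraction of successful trials is a Monte Carlo estimate of the probability the abstract shows is at least inverse-polynomial for grids.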
  5. Santhanam, Rahul (Ed.)
    The following question arises naturally in the study of graph streaming algorithms: Is there any graph problem which is "not too hard", in that it can be solved efficiently with total communication (nearly) linear in the number n of vertices, and for which, nonetheless, any streaming algorithm with Õ(n) space (i.e., a semi-streaming algorithm) needs a polynomial n^Ω(1) number of passes? Assadi, Chen, and Khanna [STOC 2019] were the first to prove that this is indeed the case. However, the lower bounds that they obtained are for rather non-standard graph problems. Our first main contribution is to present the first polynomial-pass lower bounds for natural "not too hard" graph problems studied previously in the streaming model: k-cores and degeneracy. We devise a novel communication protocol for both problems with near-linear communication, thus showing that k-cores and degeneracy are natural examples of "not too hard" problems. Indeed, previous work has developed single-pass semi-streaming algorithms for approximating these problems. In contrast, we prove that any semi-streaming algorithm for exactly solving these problems requires (almost) Ω(n^{1/3}) passes. The lower bound follows by a reduction from a generalization of the hidden pointer chasing (HPC) problem of Assadi, Chen, and Khanna, which is also the basis of their earlier semi-streaming lower bounds. Our second main contribution is improved round-communication lower bounds for the underlying communication problems at the basis of these reductions: - We improve the previous lower bound of Assadi, Chen, and Khanna for HPC to achieve optimal bounds for this problem. - We further observe that all current reductions from HPC can also work with a generalized version of this problem that we call MultiHPC, and prove an even stronger and optimal lower bound for this generalization. These two results collectively allow us to improve the resulting pass lower bounds for semi-streaming algorithms by a polynomial factor, namely, from n^{1/5} to n^{1/3} passes.
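For reference, the classical offline peeling algorithm for k-cores and degeneracy is sketched below; it is the textbook procedure, not a streaming or communication-efficient algorithm of the kind discussed in this abstract, and the example graph is arbitrary.

```python
import heapq

def core_numbers(adj):
    """Classical peeling: repeatedly remove a minimum-degree vertex.
    Returns (degeneracy, core number of each vertex)."""
    deg = {v: len(adj[v]) for v in adj}
    heap = [(d, v) for v, d in deg.items()]
    heapq.heapify(heap)
    removed, core, degeneracy = set(), {}, 0
    while heap:
        d, v = heapq.heappop(heap)
        if v in removed or d != deg[v]:
            continue                       # stale heap entry, skip it
        degeneracy = max(degeneracy, d)
        core[v] = degeneracy               # core number = peak min-degree so far
        removed.add(v)
        for u in adj[v]:
            if u not in removed:
                deg[u] -= 1
                heapq.heappush(heap, (deg[u], u))
    return degeneracy, core

# A triangle with a pendant path attached: degeneracy 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
print(core_numbers(adj))
```

This runs in near-linear time with the whole graph in memory; the abstract's results concern how hard the same quantities become when the edges arrive as a stream and memory is limited to Õ(n).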