Efficient approximation of deep ReLU networks for functions on low dimensional manifolds
Deep neural networks have revolutionized many real-world applications, owing to their flexibility in fitting data and their accurate predictions on unseen data. A line of research shows that neural networks can approximate certain classes of functions to arbitrary accuracy, but the required network size scales exponentially with the data dimension. Empirical results, however, suggest that networks of moderate size already yield appealing performance. To explain this gap, a common belief is that many data sets exhibit low-dimensional structures and can be modeled as samples near a low-dimensional manifold. In this paper, we prove that neural networks can efficiently approximate functions supported on low-dimensional manifolds. The network size scales exponentially in the approximation error, with an exponent depending on the intrinsic dimension of the data and the smoothness of the function. Our result shows that exploiting low-dimensional data structures can greatly enhance the efficiency of function approximation by neural networks. We also construct a subnetwork that assigns input data to their corresponding local neighborhoods, which may be of independent interest.
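The subnetwork that routes inputs to their local neighborhoods can be built from ReLU units alone. A minimal one-dimensional sketch, assuming a hypothetical chart interval [a, b] and a transition width delta (neither is from the paper), is a trapezoid indicator that equals 1 well inside the chart and 0 outside it, realized by exactly four ReLU units:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def chart_indicator(x, a, b, delta):
    """Trapezoid built from four ReLU units: equals 1 on [a+delta, b-delta],
    0 outside [a, b], and is piecewise linear on the two transition bands."""
    return (relu(x - a) - relu(x - a - delta)
            - relu(x - b + delta) + relu(x - b)) / delta

x = np.linspace(-1.0, 2.0, 7)
print(chart_indicator(x, 0.0, 1.0, 0.1))
```

Multiplying such indicators coordinate-wise gives a soft assignment of an input to one chart of a manifold cover, which is the mechanism that lets the approximation work locally.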
Award ID(s): 2012652
NSFPAR ID: 10351574
Date Published:
Journal Name: Conference on Neural Information Processing Systems
Volume: 32
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this



Most existing statistical theories on deep neural networks have sample complexities cursed by the data dimension and therefore cannot well explain the empirical success of deep learning on high-dimensional data. To bridge this gap, we propose to exploit the low-dimensional structures of real-world data sets and establish theoretical guarantees for convolutional residual networks (ConvResNets) in terms of function approximation and statistical recovery for binary classification problems. Specifically, given data lying on a d-dimensional manifold isometrically embedded in ℝ^D, we prove that if the network architecture is properly chosen, ConvResNets can (1) approximate Besov functions on manifolds with arbitrary accuracy, and (2) learn a classifier by minimizing the empirical logistic risk, which gives an excess risk of order n^(−2s/(2s+d)), where s is a smoothness parameter. This implies that the sample complexity depends on the intrinsic dimension d rather than the data dimension D. Our results demonstrate that ConvResNets are adaptive to low-dimensional structures of data sets.
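The rate above can be made concrete: setting n^(−2s/(2s+d)) = ε and solving for n gives n ≈ ε^(−(2s+d)/(2s)), so the sample-size exponent is governed by the intrinsic dimension d rather than the ambient dimension D. A small sketch (the values s = 2, d = 4, D = 256, ε = 0.1 are illustrative, not taken from the paper):

```python
# Samples needed to reach excess risk eps under a rate n^(-2s/(2s+dim)):
# solving n^(-2s/(2s+dim)) = eps gives n = eps^(-(2s+dim)/(2s)).
def samples_needed(eps, s, dim):
    return eps ** (-(2 * s + dim) / (2 * s))

s, eps = 2.0, 0.1
intrinsic = samples_needed(eps, s, dim=4)    # exponent driven by d = 4
ambient = samples_needed(eps, s, dim=256)    # exponent driven by D = 256
print(f"n with d = 4:   {intrinsic:.0f}")
print(f"n with D = 256: {ambient:.3e}")
```

With these illustrative numbers the intrinsic rate needs on the order of 10^2 samples, while a rate cursed by D = 256 would need on the order of 10^65.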

The power of DNNs has been successfully demonstrated on a wide variety of high-dimensional problems that cannot be solved by conventional control design methods. These successes also uncover fundamental and pressing challenges in understanding the representability of deep neural networks for complex, high-dimensional input–output relations. Toward the goal of understanding these fundamental questions, we apply an algebraic framework developed in our previous work to analyze ReLU neural network approximation of compositional functions. We prove that for Lipschitz continuous functions, ReLU neural networks have an approximation error upper bound that is a polynomial in the network's complexity and the compositional features. If the compositional features do not grow exponentially with dimension, which is the case in many applications, the complexity of the DNN grows polynomially. In addition to function approximation, we also establish ReLU network approximation results for the trajectories of control systems and for a Lyapunov function that characterizes the domain of attraction.
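As a toy illustration of the Lipschitz case (the one-dimensional special case, not the compositional framework itself), an L-Lipschitz function on [0, 1] can be interpolated on a grid of width h = 1/n by a one-hidden-layer ReLU network, with sup-norm error at most L·h. The target sin(4x) and the grid size below are illustrative choices:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Target: a Lipschitz function on [0, 1] (L = 4 works for sin(4x)).
f = lambda x: np.sin(4.0 * x)
L, n = 4.0, 20
grid = np.linspace(0.0, 1.0, n + 1)          # knots, spacing h = 1/n
slopes = np.diff(f(grid)) / np.diff(grid)    # chord slope on each cell
coeffs = np.diff(slopes, prepend=0.0)        # ReLU coefficients = slope jumps

def relu_net(x):
    """One-hidden-layer ReLU net realizing the piecewise-linear interpolant."""
    return f(grid[0]) + sum(c * relu(x - t) for c, t in zip(coeffs, grid[:-1]))

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(relu_net(xs) - f(xs)))
print(f"width-{n} net, sup error {err:.4f} <= L*h = {L / n:.4f}")
```

Halving h halves the error bound while only doubling the network width, which is the kind of complexity-versus-accuracy trade-off the approximation results quantify.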

Overparameterized neural networks enjoy great representation power on complex data and, more importantly, yield sufficiently smooth outputs, which is crucial to their generalization and robustness. Most existing function approximation theories suggest that with sufficiently many parameters, neural networks can approximate certain classes of functions well in terms of function value. The networks themselves, however, can be highly nonsmooth. To bridge this gap, we take convolutional residual networks (ConvResNets) as an example and prove that large ConvResNets can not only approximate a target function in terms of function value, but also exhibit sufficient first-order smoothness. Moreover, we extend our theory to approximating functions supported on a low-dimensional manifold. Our theory partially justifies the benefits of using deep and wide networks in practice. Numerical experiments on adversarially robust image classification are provided to support our theory.
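First-order smoothness is precisely what robustness arguments consume: if the learned network has Lipschitz constant L, then |f(x+δ) − f(x)| ≤ L·‖δ‖, so a prediction margin larger than L·‖δ‖ cannot be flipped by the perturbation. A minimal NumPy sketch with a random two-layer ReLU net (the weights and sizes are illustrative, not a ConvResNet), using the product of spectral norms as a crude upper bound on L:

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8)) * 0.2   # illustrative random weights
W2 = rng.standard_normal((1, 16)) * 0.2

def net(x):
    return (W2 @ np.maximum(W1 @ x, 0.0))[0]   # two-layer ReLU network

# ReLU is 1-Lipschitz, so the product of the layers' spectral norms
# upper-bounds the network's Lipschitz constant.
L = np.linalg.norm(W1, 2) * np.linalg.norm(W2, 2)

x = rng.standard_normal(8)
delta = 0.01 * rng.standard_normal(8)
change = abs(net(x + delta) - net(x))
print(f"|f(x+d)-f(x)| = {change:.5f} <= L*||d|| = {L * np.linalg.norm(delta):.5f}")
```

The spectral-norm product is loose in practice; the point of the smoothness results above is that a small effective L can be guaranteed by the architecture rather than estimated after the fact.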