Neural Networks can Learn Representations with Gradient Descent
Significant theoretical work has established that in specific regimes, neural networks trained by gradient descent behave like kernel methods. However, in practice, it is known that neural networks strongly outperform their associated kernels. In this work, we explain this gap by demonstrating that there is a large class of functions which cannot be efficiently learned by kernel methods but can be easily learned with gradient descent on a two-layer neural network outside the kernel regime, by learning representations that are relevant to the target task. We also demonstrate that these representations allow for efficient transfer learning, which is impossible in the kernel regime. Specifically, we consider the problem of learning polynomials which depend on only a few relevant directions, i.e. of the form $f^\star(x) = g(Ux)$ where $U: \R^d \to \R^r$ with $d \gg r$. When the degree of $f^\star$ is $p$, it is known that $n \asymp d^p$ samples are necessary to learn $f^\star$ in the kernel regime. Our primary result is that gradient descent learns a representation of the data which depends only on the directions relevant to $f^\star$. This results in an improved sample complexity of $n \asymp d^2 r + d r^p$ and enables transfer learning with sample complexity independent of $d$.
Editors: Loh, Po-Ling; Raginsky, Maxim
Award ID(s): 2002272
NSF-PAR ID: 10356406
Journal Name: Proceedings of Thirty Fifth Conference on Learning Theory
Sponsoring Org: National Science Foundation
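
The setting in the abstract, a two-layer network trained by gradient descent on a polynomial $f^\star(x) = g(Ux)$ that depends on only a few directions, can be illustrated with a toy sketch. All sizes, the hidden width, the learning rate, and the quadratic choice of $g$ below are illustrative assumptions, not the paper's regime or rates; the sketch only shows the setup, not the proved sample-complexity guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (assumptions, not the paper's regime): ambient
# dimension d, relevant subspace dimension r << d, n samples, width m.
d, r, n, m = 20, 2, 2000, 64
U = np.linalg.qr(rng.normal(size=(d, r)))[0].T   # r x d, orthonormal rows
g = lambda z: (z ** 2).sum(axis=1)               # a degree-2 polynomial

X = rng.normal(size=(n, d))
y = g(X @ U.T)                                   # targets f*(x) = g(Ux)

# Two-layer ReLU network; both layers trained by plain gradient descent.
W = rng.normal(size=(m, d)) / np.sqrt(d)
v = rng.normal(size=m) / np.sqrt(m)

def loss():
    return np.mean((np.maximum(X @ W.T, 0.0) @ v - y) ** 2)

mse_init, lr = loss(), 0.02
for _ in range(400):
    H = np.maximum(X @ W.T, 0.0)                 # hidden activations, n x m
    err = H @ v - y
    grad_v = H.T @ err / n
    grad_W = ((err[:, None] * (H > 0) * v).T @ X) / n
    v -= lr * grad_v
    W -= lr * grad_W
mse_final = loss()
```

Because the target depends only on the $r$-dimensional projection $Ux$, the first-layer weights can adapt toward that subspace during training, which is exactly the representation-learning effect the abstract contrasts with the kernel regime.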
More Like this


The adversarial vulnerability of neural nets, and subsequent techniques to create robust models, have attracted significant attention; yet we still lack a full understanding of this phenomenon. Here, we study adversarial examples of trained neural networks through analytical tools afforded by recent theory advances connecting neural networks and kernel methods, namely the Neural Tangent Kernel (NTK), following a growing body of work that leverages the NTK approximation to successfully analyze important deep learning phenomena and design algorithms for new applications. We show how NTKs allow one to generate adversarial examples in a ``training-free'' fashion, and demonstrate that they transfer to fool their finite-width neural net counterparts in the ``lazy'' regime. We leverage this connection to provide an alternative view on robust and non-robust features, which have been suggested to underlie the adversarial brittleness of neural nets. Specifically, we define and study features induced by the eigendecomposition of the kernel to better understand the role of robust and non-robust features, the reliance on both for standard classification, and the robustness-accuracy tradeoff. We find that such features are surprisingly consistent across architectures, and that robust features tend to correspond to the largest eigenvalues of the model, and thus are learned early during training.
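
As a rough illustration of the kernel-side attack described above, one can form the empirical NTK of a tiny randomly initialized ReLU net, fit a kernel ridge predictor, and perturb an input against the predictor's gradient. Every size, the toy task, and the step size `eps` below are assumptions made for the sketch; this is not the paper's construction, and the net here is left at random initialization for simplicity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny two-layer net f(x) = v . relu(W x) at random init; its empirical NTK
# is phi(x) . phi(x'), where phi(x) stacks the parameter gradients of f.
d, m, n = 10, 50, 200
W = rng.normal(size=(m, d)) / np.sqrt(d)
v = rng.normal(size=m) / np.sqrt(m)

def features(X):
    """phi(X): rows are [df/dW (flattened), df/dv] at each input."""
    H = X @ W.T
    act = np.maximum(H, 0.0)                        # df/dv_j = relu(w_j . x)
    dW = ((H > 0) * v)[:, :, None] * X[:, None, :]  # df/dW_jk = v_j 1[.] x_k
    return np.concatenate([dW.reshape(len(X), -1), act], axis=1)

# Kernel ridge regression with the empirical NTK on a toy binary task.
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])
Phi = features(X)
alpha = np.linalg.solve(Phi @ Phi.T + 1e-3 * np.eye(n), y)
coef = Phi.T @ alpha

def predict(Xq):
    return features(Xq) @ coef

# "Training-free" adversarial step: move a test point against the kernel
# predictor's input gradient (finite differences keep the sketch short).
x0 = rng.normal(size=d)
delta, eps = 1e-4, 0.2
grad = np.array([(predict((x0 + delta * e)[None])[0]
                  - predict((x0 - delta * e)[None])[0]) / (2 * delta)
                 for e in np.eye(d)])
s = np.sign(predict(x0[None])[0])
x_adv = x0 - eps * s * grad / np.linalg.norm(grad)
```

The perturbation pushes the kernel predictor's output toward the opposite class; the abstract's point is that such kernel-crafted perturbations also transfer to the corresponding finite-width net in the lazy regime.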

Neural Architecture Search (NAS) is a popular method for automatically designing optimized architectures for high-performance deep learning. In this approach, it is common to use bilevel optimization, where one optimizes the model weights over the training data (lower-level problem) and various hyperparameters, such as the configuration of the architecture, over the validation data (upper-level problem). This paper explores the statistical aspects of such problems with train-validation splits. In practice, the lower-level problem is often overparameterized and can easily achieve zero loss. Thus, a priori it seems impossible to distinguish the right hyperparameters based on training loss alone, which motivates a better understanding of the role of the train-validation split. To this aim, this work establishes the following results:
• We show that refined properties of the validation loss, such as risk and hypergradients, are indicative of those of the true test loss. This reveals that the upper-level problem helps select the most generalizable model and prevent overfitting with a near-minimal validation sample size. Importantly, this is established for continuous spaces, which are highly relevant for popular differentiable search schemes.
• We establish generalization bounds for NAS problems with an emphasis on an activation search problem. When optimized with gradient descent, we show …

Gradient descent-based optimization methods underpin the parameter training of neural networks, and hence comprise a significant component in the impressive test results found in a number of applications. Introducing stochasticity is key to their success in practical problems, and there is some understanding of the role of stochastic gradient descent in this context. Momentum modifications of gradient descent, such as Polyak’s Heavy Ball method (HB) and Nesterov’s method of accelerated gradients (NAG), are also widely adopted. In this work our focus is on understanding the role of momentum in the training of neural networks, concentrating on the common situation in which the momentum contribution is fixed at each step of the algorithm. To expose the ideas simply, we work in the deterministic setting. Our approach is to derive continuous-time approximations of the discrete algorithms; these continuous-time approximations provide insights into the mechanisms at play within the discrete algorithms. We prove three such approximations. Firstly, we show that standard implementations of fixed-momentum methods approximate a time-rescaled gradient descent flow, asymptotically as the learning rate shrinks to zero; this result does not distinguish momentum methods from pure gradient descent, in the limit of vanishing learning rate. We then proceed …
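
The first approximation result above can be checked numerically on the 1-D quadratic f(x) = x²/2, whose time-rescaled gradient flow is x(t) = x₀·exp(−t/(1−β)). The parameter values below (β, h, the horizon T) are assumptions for the demo, not taken from the paper.

```python
import numpy as np

# Heavy Ball on f(x) = x^2/2 (so f'(x) = x) with fixed momentum beta and
# learning rate h, started at rest (x_prev = x0).
beta, h, x0, T = 0.5, 1e-3, 1.0, 0.2
steps = round(T / h)

x_prev = x = x0
for _ in range(steps):
    # x_{k+1} = x_k - h f'(x_k) + beta (x_k - x_{k-1})
    x, x_prev = x - h * x + beta * (x - x_prev), x

# Time-rescaled gradient flow x'(t) = -f'(x)/(1 - beta), evaluated at T.
flow = x0 * np.exp(-T / (1 - beta))

# Plain gradient descent with the same h does NOT track the rescaled flow.
x_gd = x0
for _ in range(steps):
    x_gd -= h * x_gd

gap_hb = abs(x - flow)     # small: HB tracks the time-rescaled flow
gap_gd = abs(x_gd - flow)  # large: the 1/(1 - beta) rescaling matters
```

Shrinking h further shrinks `gap_hb` toward zero, which is exactly the asymptotic statement: with fixed momentum, HB follows gradient descent's flow run at a 1/(1−β) faster clock, rather than a qualitatively different dynamics.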