A major factor in the success of deep neural networks is the use of sophisticated architectures rather
than the classical multilayer perceptron (MLP). Residual networks (ResNets) stand out among these
powerful modern architectures. Previous works focused on the optimization advantages of deep
ResNets over deep MLPs. In this paper, we show another distinction between the two models,
namely, a tendency of ResNets to promote smoother interpolations than MLPs. We analyze this
phenomenon via the neural tangent kernel (NTK) approach. First, we compute the NTK for the
considered ResNet model and prove its stability during gradient descent training. Then, we show
through several evaluation methodologies that, for ReLU activations, the NTK of the ResNet and its
kernel regression results are smoother than those of the MLP. The greater smoothness observed in our
analysis may explain the better generalization ability of ResNets and the common practice of moderately
attenuating the residual blocks.
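To make the abstract's pipeline concrete, the sketch below illustrates the general recipe of computing a depth-L ReLU NTK in closed form and using it for kernel ("ridgeless") regression, which is the kind of interpolation whose smoothness the paper compares. The `ntk_mlp` recursion is the standard infinite-width ReLU MLP NTK; `ntk_res_sketch`, the depths, and the block-attenuation factor `alpha` are illustrative assumptions and do not reproduce the paper's exact ResNet model or derived kernel.

```python
# Hypothetical sketch (not the paper's code): closed-form ReLU NTK recursions and the
# kernel regression used to study interpolation smoothness.
import numpy as np

def _relu_moments(s_x, s_y, s_xy):
    """Return 2*E[relu(u)relu(v)] and 2*E[relu'(u)relu'(v)] for (u,v) ~ N(0, [[s_x, s_xy],[s_xy, s_y]])."""
    norm = np.sqrt(s_x * s_y)
    rho = np.clip(s_xy / np.maximum(norm, 1e-12), -1.0, 1.0)
    theta = np.arccos(rho)
    lam = norm * (np.sin(theta) + (np.pi - theta) * rho) / np.pi
    lam_dot = (np.pi - theta) / np.pi
    return lam, lam_dot

def ntk_mlp(X, Z, depth=5):
    """Standard infinite-width NTK of a depth-`depth` ReLU MLP (He-style weight variance, no bias)."""
    s_xy = X @ Z.T
    s_x = np.sum(X * X, axis=1, keepdims=True) * np.ones((1, Z.shape[0]))
    s_y = np.ones((X.shape[0], 1)) * np.sum(Z * Z, axis=1, keepdims=True).T
    theta_ntk = s_xy.copy()
    for _ in range(depth):
        lam, lam_dot = _relu_moments(s_x, s_y, s_xy)
        theta_ntk = lam + theta_ntk * lam_dot   # Theta^(l) = Sigma^(l) + Theta^(l-1) * dot-Sigma^(l)
        s_xy = lam                              # diagonal entries are preserved by this normalization
    return theta_ntk

def ntk_res_sketch(X, Z, depth=5, alpha=0.3):
    """Illustrative residual-style recursion (an assumption, not the paper's exact kernel):
    each block adds an alpha-scaled ReLU branch on top of an identity skip path."""
    s_xy = X @ Z.T
    s_x = np.sum(X * X, axis=1, keepdims=True) * np.ones((1, Z.shape[0]))
    s_y = np.ones((X.shape[0], 1)) * np.sum(Z * Z, axis=1, keepdims=True).T
    theta_ntk = s_xy.copy()
    a2 = alpha ** 2
    for _ in range(depth):
        lam, lam_dot = _relu_moments(s_x, s_y, s_xy)
        # new-block parameters plus gradients flowing through both the skip and the branch
        theta_ntk = theta_ntk * (1.0 + a2 * lam_dot) + a2 * (lam + s_xy * lam_dot)
        s_xy = s_xy + a2 * lam
        s_x, s_y = s_x * (1.0 + a2), s_y * (1.0 + a2)
    return theta_ntk

def ntk_interpolant(kernel_fn, X_train, y_train, X_test, reg=1e-8):
    """Kernel regression with the NTK: f(x) = K(x, X) K(X, X)^{-1} y (tiny ridge for stability)."""
    K_tt = kernel_fn(X_train, X_train)
    K_st = kernel_fn(X_test, X_train)
    coef = np.linalg.solve(K_tt + reg * np.eye(len(X_train)), y_train)
    return K_st @ coef

# Toy 1-D interpolation: inputs lifted to the unit circle so all norms are equal.
angles = np.linspace(0.2, np.pi - 0.2, 8)
X_tr = np.stack([np.cos(angles), np.sin(angles)], axis=1)
y_tr = np.sign(np.sin(3 * angles))
grid = np.linspace(0.0, np.pi, 200)
X_te = np.stack([np.cos(grid), np.sin(grid)], axis=1)
f_mlp = ntk_interpolant(ntk_mlp, X_tr, y_tr, X_te)
f_res = ntk_interpolant(ntk_res_sketch, X_tr, y_tr, X_te)
```

Plotting `f_mlp` and `f_res` over `grid` gives the kind of side-by-side interpolation comparison the abstract refers to; the actual smoothness measures and the proven NTK stability results are developed in the body of the paper.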