%A Heckel, Reinhard
%A Soltanolkotabi, Mahdi
%D 2020
%M OSTI ID: 10132891
%P Medium: X
%T Denoising and Regularization via Exploiting the Structural Bias of Convolutional Generators
%X Convolutional Neural Networks (CNNs) have emerged as highly successful tools for image generation, recovery, and restoration. This success is often attributed to large amounts of training data. However, recent experimental findings challenge this view and instead suggest that a major contributing factor is that convolutional networks impose strong prior assumptions about natural images. A surprising experiment that highlights this architectural bias is that one can remove noise and corruptions from a natural image without using any training data, by simply fitting (via gradient descent) a randomly initialized, over-parameterized convolutional generator to the single corrupted image. While this over-parameterized network can fit the corrupted image perfectly, after only a few iterations of gradient descent one obtains the uncorrupted image. This intriguing phenomenon enables state-of-the-art CNN-based denoising and regularization of linear inverse problems such as compressive sensing. In this paper we take a step towards demystifying this experimental phenomenon by attributing the effect to particular architectural choices of convolutional networks, namely convolutions with fixed interpolating filters. We then formally characterize the dynamics of fitting a two-layer convolutional generator to a noisy signal and prove that early-stopped gradient descent denoises/regularizes. This result relies on showing that convolutional generators fit the structured part of an image significantly faster than the corrupted portion.