
In the low-rank matrix completion (LRMC) problem, the low-rank assumption means that the columns (or rows) of the matrix to be completed are points on a low-dimensional linear algebraic variety. This paper extends this thinking to cases where the columns are points on a low-dimensional nonlinear algebraic variety, a problem we call Low Algebraic Dimension Matrix Completion (LADMC). Matrices whose columns belong to a union of subspaces are an important special case. We propose a LADMC algorithm that leverages existing LRMC methods on a tensorized representation of the data. For example, a second-order tensorized representation is formed by taking the Kronecker product of each column with itself, and we consider higher-order tensorizations as well. This approach will succeed in many cases where traditional LRMC is guaranteed to fail, because the data are low-rank in the tensorized representation but not in the original representation. We also provide a formal mathematical justification for the success of our method. In particular, we give bounds on the rank of these data in the tensorized representation, and we prove sampling requirements that guarantee uniqueness of the solution. We also provide experimental results showing that the new approach outperforms existing state-of-the-art methods for matrix completion under a union-of-subspaces model.
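As a small numerical illustration of why tensorization helps (a sketch with synthetic data, not the paper's completion algorithm): columns drawn from a union of two r-dimensional subspaces of R^d form a matrix of rank 2r, while their second-order tensorized versions lie in a space of dimension at most 2·r(r+1)/2, which can be far below the ambient tensorized dimension d².

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n = 10, 2, 50

# Columns drawn from a union of two r-dimensional subspaces of R^d.
U1 = rng.standard_normal((d, r))
U2 = rng.standard_normal((d, r))
X = np.hstack([U1 @ rng.standard_normal((r, n)),
               U2 @ rng.standard_normal((r, n))])

# Second-order tensorization: replace each column x with kron(x, x).
T = np.column_stack([np.kron(X[:, j], X[:, j]) for j in range(X.shape[1])])

rank_X = np.linalg.matrix_rank(X)  # generically 2r = 4
rank_T = np.linalg.matrix_rank(T)  # at most 2 * r*(r+1)/2 = 6, far below d^2 = 100
```

Because kron(x, x) is a symmetric outer product, each subspace contributes only r(r+1)/2 dimensions in the tensorized space, so the tensorized matrix remains very low-rank relative to d².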

We give a tight characterization of the (vectorized Euclidean) norm of weights required to realize a function f : R^d → R as a single hidden-layer ReLU network with an unbounded number of units (infinite width), extending the univariate characterization of Savarese et al. (2019) to the multivariate case.

Recent advances have illustrated that it is often possible to learn to solve linear inverse problems in imaging using training data, and that such learned solvers can outperform more traditional regularized least-squares solutions. Along these lines, we present some extensions of the Neumann network, a recently introduced end-to-end learned architecture inspired by a truncated Neumann series expansion of the solution map of a regularized least-squares problem. Here we summarize the Neumann network approach and show that it has a form compatible with the optimal reconstruction function for a given inverse problem. We also investigate an extension of the Neumann network that incorporates a more sample-efficient patch-based regularization approach.
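The truncated Neumann series idea underlying this architecture can be sketched on a plain (unregularized) least-squares problem: when the step size η satisfies ‖I − ηAᵀA‖ < 1, the inverse expands as (AᵀA)⁻¹ = η Σₖ (I − ηAᵀA)ᵏ, so summing a finite number of terms approximates the least-squares solution. This is a minimal sketch under those assumptions; the Neumann network additionally subtracts a learned regularizer from each term, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
m, d = 80, 20
A = rng.standard_normal((m, d)) / np.sqrt(m)
x_true = rng.standard_normal(d)
y = A @ x_true

eta = 1.0 / np.linalg.norm(A.T @ A, 2)  # ensures || I - eta A^T A || < 1
K = 500                                 # number of series terms

# Truncated Neumann series: x_hat = eta * sum_{k=0}^{K} (I - eta A^T A)^k A^T y.
term = eta * (A.T @ y)
x_hat = term.copy()
for _ in range(K):
    term = term - eta * (A.T @ (A @ term))  # apply (I - eta A^T A) once
    x_hat += term

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
```

Each loop iteration applies (I − ηAᵀA) to the previous term and accumulates; a learned variant would insert a trainable mapping into this update.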

Many modern approaches to image reconstruction are based on learning a regularizer that implicitly encodes a prior over the space of images. For the large-scale images common in domains like remote sensing, medical imaging, and astronomy, learning a prior over entire images requires an often-impractical amount of training data. This work describes a deep patch-based image regularization approach that can be incorporated into a variety of modern algorithms. Learning the regularizer amounts to learning a prior over image patches, greatly reducing the dimension of the space to be learned and hence the sample complexity. Demonstrations in a remote sensing application illustrate that learning patch-based regularizers produces high-quality reconstructions and even permits learning from a single ground-truth image.
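The dimension reduction that makes patch-based priors sample-efficient is easy to see concretely. A hypothetical sketch (the array sizes are illustrative, not from the paper): a single 64×64 image is one sample in a 4096-dimensional space, but it yields thousands of overlapping 8×8 patches, each a sample in a 64-dimensional space over which the prior is learned.

```python
import numpy as np

rng = np.random.default_rng(2)
image = rng.standard_normal((64, 64))  # stand-in for one ground-truth image
p = 8                                   # patch side length

# Extract every overlapping p x p patch and flatten each to a vector.
patches = np.lib.stride_tricks.sliding_window_view(image, (p, p)).reshape(-1, p * p)

# One 64x64 image -> (64-8+1)^2 = 3249 training samples of dimension 64,
# instead of a single sample of dimension 4096.
```

This is why a patch-based regularizer can be learned even from a single ground-truth image, as the abstract notes.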