-
The maximal coding rate reduction (MCR2) objective for learning structured and compact deep representations is drawing increasing attention, especially after its recent use in deriving fully explainable and highly effective deep network architectures. However, it lacks a complete theoretical justification: only the properties of its global optima are known, and its global landscape has not been studied. In this work, we give a complete characterization of the properties of all of its local and global optima, as well as other types of critical points. Specifically, we show that each (local or global) maximizer of the MCR2 problem corresponds to a low-dimensional, discriminative, and diverse representation, and, furthermore, that each critical point of the objective is either a local maximizer or a strict saddle point. Such a favorable landscape makes MCR2 a natural choice of objective for learning diverse and discriminative representations via first-order optimization methods. To validate our theoretical findings, we conduct extensive experiments on both synthetic and real data sets.

Free, publicly-accessible full text available June 25, 2025
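For context, the MCR2 objective referenced in the abstract above is commonly written as a difference of coding rates, ΔR(Z, Π, ε) = R(Z, ε) − R_c(Z, ε | Π), where R is a log-determinant rate for the whole representation and R_c is a label-weighted sum of per-class rates. The following NumPy sketch of that computation is illustrative only; the ε value, the d × n layout of Z, and the function names are assumptions, not code from the paper.

```python
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z, eps) = 1/2 * logdet(I + d/(n * eps^2) * Z Z^T) for a d x n matrix Z."""
    d, n = Z.shape
    alpha = d / (n * eps ** 2)
    # slogdet returns (sign, log|det|); the matrix here is positive definite.
    return 0.5 * np.linalg.slogdet(np.eye(d) + alpha * Z @ Z.T)[1]

def mcr2(Z, labels, eps=0.5):
    """Coding rate reduction: R(Z) minus the label-weighted class-conditional rates."""
    d, n = Z.shape
    rate = coding_rate(Z, eps)
    rate_c = 0.0
    for c in np.unique(labels):
        Zc = Z[:, labels == c]          # columns belonging to class c
        nc = Zc.shape[1]
        alpha = d / (nc * eps ** 2)
        rate_c += (nc / (2 * n)) * np.linalg.slogdet(np.eye(d) + alpha * Zc @ Zc.T)[1]
    return rate - rate_c
```

By concavity of logdet over positive definite matrices, ΔR is nonnegative, and maximizing it expands the overall representation while keeping each class compact, consistent with the "diverse and discriminative" maximizers characterized in the abstract.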
-
An extensively studied phenomenon of the past few years in training deep networks is the implicit bias of gradient descent towards parsimonious solutions. In this work, we further investigate this phenomenon by narrowing our focus to deep matrix factorization, where we reveal surprising low-dimensional structures in the learning dynamics when the target matrix is low-rank. Specifically, we show that the evolution of gradient descent starting from arbitrary orthogonal initialization only affects a minimal portion of singular vector spaces across all weight matrices. In other words, the learning process happens only within a small invariant subspace of each weight matrix, despite the fact that all parameters are updated throughout training. From this, we provide rigorous justification for low-rank training in a specific, yet practical setting. In particular, we demonstrate that we can construct compressed factorizations that are equivalent to full-width, deep factorizations throughout training for solving low-rank matrix completion problems efficiently.

Free, publicly-accessible full text available November 6, 2024
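As a concrete instance of the setting in the abstract above, the sketch below runs gradient descent on a depth-3 factorization X = W3 W2 W1 for low-rank matrix completion from a scaled orthogonal initialization. The dimensions, step size, iteration count, and initialization scale are illustrative assumptions, not the paper's experimental configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, L = 20, 2, 3                                   # ambient size, target rank, depth
M = rng.standard_normal((d, r)) @ rng.standard_normal((r, d))   # rank-r target matrix
mask = rng.random((d, d)) < 0.5                      # observed entries (completion setting)

def product(Ws):
    """End-to-end matrix X = W_L ... W_2 W_1."""
    X = Ws[0]
    for W in Ws[1:]:
        X = W @ X
    return X

def loss(Ws):
    return 0.5 * np.sum((mask * (product(Ws) - M)) ** 2)

# Scaled orthogonal initialization of every factor.
Ws = [0.5 * np.linalg.qr(rng.standard_normal((d, d)))[0] for _ in range(L)]
initial_loss = loss(Ws)

lr = 0.01
for _ in range(4000):
    G = mask * (product(Ws) - M)                     # dloss/dX on observed entries
    grads = []
    for i in range(L):
        left = np.eye(d)                             # W_L ... W_{i+1}
        for W in Ws[i + 1:]:
            left = W @ left
        right = np.eye(d)                            # W_{i-1} ... W_1
        for W in Ws[:i]:
            right = W @ right
        grads.append(left.T @ G @ right.T)           # chain rule: X = left @ W_i @ right
    for i in range(L):
        Ws[i] = Ws[i] - lr * grads[i]
```

In line with the result described above, one could further compare SVDs of each W_i before and after training to see that the updates concentrate in a low-dimensional subspace of each factor, which is what justifies replacing the full-width factors with compressed ones.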
-
Toeplitz operators are fundamental and ubiquitous in signal processing and information theory as models for linear, time-invariant (LTI) systems. Due to the fact that any practical system can access only signals of finite duration, time-limited restrictions of Toeplitz operators are naturally of interest. To provide a unifying treatment of such systems working on different signal domains, we consider time-limited Toeplitz operators on locally compact abelian groups with the aid of the Fourier transform on these groups. In particular, we survey existing results concerning the relationship between the spectrum of a time-limited Toeplitz operator and the spectrum of the corresponding non-time-limited Toeplitz operator. We also develop new results specifically concerning the eigenvalues of time-frequency limiting operators on locally compact abelian groups. Applications of our unifying treatment are discussed in relation to channel capacity and in relation to representation and approximation of signals.
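For intuition about the eigenvalue behavior surveyed in the abstract above, consider the classical discrete case on the group of integers: time-limiting to N samples and band-limiting to normalized frequencies [-W, W] yields the N × N prolate (Slepian) matrix, whose eigenvalues lie in [0, 1] and cluster near 0 and 1. The parameter choices in this NumPy sketch are illustrative assumptions.

```python
import numpy as np

def prolate_matrix(N, W):
    """Time-frequency limiting operator on the integers: time-limit to {0, ..., N-1},
    band-limit to [-W, W] with W < 1/2. Entries are sin(2*pi*W*(m-n)) / (pi*(m-n)),
    equal to 2*W on the diagonal (np.sinc is the normalized sinc, sin(pi x)/(pi x))."""
    m = np.arange(N)
    diff = m[:, None] - m[None, :]
    return 2 * W * np.sinc(2 * W * diff)

N, W = 128, 0.25
eigs = np.linalg.eigvalsh(prolate_matrix(N, W))
# Eigenvalues cluster near 0 and 1; roughly 2*W*N of them are close to 1,
# with a narrow transition band of intermediate eigenvalues in between.
near_one = int(np.sum(eigs > 0.5))
```

The trace of the matrix is exactly 2WN, matching the classical rule of thumb that about 2WN eigenvalues are close to 1; this is the discrete analogue of the spectral concentration results discussed above.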