Search for: All records

Award ID contains: 2023166


  1. Empirical evidence suggests that for a variety of overparameterized nonlinear models, most notably in neural network training, the growth of the loss around a minimizer strongly impacts its performance. Flat minima—those around which the loss grows slowly—appear to generalize well. This work takes a step towards understanding this phenomenon by focusing on the simplest class of overparameterized nonlinear models: those arising in low-rank matrix recovery. We analyse overparameterized matrix and bilinear sensing, robust principal component analysis, covariance matrix estimation, and single hidden layer neural networks with quadratic activation functions. In all cases, we show that flat minima, measured by the trace of the Hessian, exactly recover the ground truth under standard statistical assumptions. For matrix completion, we establish weak recovery, although empirical evidence suggests exact recovery holds here as well. We complete the paper with synthetic experiments that illustrate our findings.
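    To make the flatness measure concrete, here is a minimal NumPy sketch (not code from the paper; all dimensions, scales, and probe counts are illustrative) that estimates the trace of the Hessian of an asymmetric matrix-sensing loss with Hutchinson probes, and compares two zero-loss factorizations of the same ground-truth matrix. The unbalanced factorization typically has a larger Hessian trace, i.e., it is sharper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny asymmetric matrix-sensing instance: recover M_star = L_star @ R_star.T.
    d, r, m = 8, 2, 80
    L_star, R_star = rng.normal(size=(d, r)), rng.normal(size=(d, r))
    A = rng.normal(size=(m, d, d))                      # Gaussian sensing matrices
    y = np.einsum('mij,ij->m', A, L_star @ R_star.T)    # noiseless measurements

    def grad(L, R):
        """Gradient of f(L, R) = 0.5 * sum_i (<A_i, L R^T> - y_i)^2."""
        resid = np.einsum('mij,ij->m', A, L @ R.T) - y
        gL = np.einsum('m,mij->ij', resid, A) @ R       # sum_i r_i A_i R
        gR = np.einsum('m,mji->ij', resid, A) @ L       # sum_i r_i A_i^T L
        return gL, gR

    def hessian_trace(L, R, probes=300, eps=1e-4):
        """Hutchinson estimate of tr(Hessian) via finite-difference Hessian-vector products."""
        est = 0.0
        for _ in range(probes):
            VL = rng.choice([-1.0, 1.0], size=L.shape)  # Rademacher probes
            VR = rng.choice([-1.0, 1.0], size=R.shape)
            gLp, gRp = grad(L + eps * VL, R + eps * VR)
            gLm, gRm = grad(L - eps * VL, R - eps * VR)
            est += (np.sum(VL * (gLp - gLm)) + np.sum(VR * (gRp - gRm))) / (2 * eps)
        return est / probes

    # Two global minimizers with the same product L R^T: balanced vs. unbalanced.
    print('balanced  :', hessian_trace(L_star, R_star))
    print('unbalanced:', hessian_trace(3.0 * L_star, R_star / 3.0))  # typically sharper
    ```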
  2. Free, publicly-accessible full text available July 15, 2026
  3. Free, publicly-accessible full text available June 30, 2026
  4. Ozay, N; Balzano, L; Panagou, D; Abate, A (Ed.)
    We consider the problem of learning a realization of a partially observed bilinear dynamical system (BLDS) from noisy input-output data. Given a single trajectory of input-output samples, we provide an algorithm and a finite-time analysis for learning the system’s Markov-like parameters, from which a balanced realization of the bilinear system can be obtained. The stability of a BLDS depends on the sequence of inputs used to excite the system. Moreover, our identification algorithm regresses the outputs to highly correlated, nonlinear, and heavy-tailed covariates. These properties, unique to partially observed bilinear dynamical systems, pose significant challenges to the analysis of our algorithm for learning the unknown dynamics. We address these challenges and provide high-probability error bounds for our identification algorithm under a uniform stability assumption. Our analysis provides insights into the system-theoretic quantities that affect learning accuracy and sample complexity. Lastly, we perform numerical experiments with synthetic data to reinforce these insights.
    Free, publicly-accessible full text available May 22, 2026
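    As a hedged illustration of the kind of regression involved (not the paper's algorithm; the toy system, lag window, and feature set below are simplifying assumptions), one can simulate a small single-input bilinear system and regress outputs onto monomials of past inputs, whose coefficients are Markov-like parameters such as c^T b and c^T N b:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy SISO bilinear system: x_{t+1} = (A + u_t N) x_t + b u_t,  y_t = c^T x_t + noise.
    n = 3
    A = 0.25 * rng.normal(size=(n, n)) / np.sqrt(n)     # kept small so ||A|| + ||N|| < 1
    N = 0.10 * rng.normal(size=(n, n)) / np.sqrt(n)
    b, c = rng.normal(size=n), rng.normal(size=n)

    T = 5000
    u = rng.choice([-1.0, 1.0], size=T)                 # bounded excitation (stability)
    x, y = np.zeros(n), np.zeros(T)
    for t in range(T):
        y[t] = c @ x + 0.01 * rng.normal()
        x = (A + u[t] * N) @ x + b * u[t]

    # Regress outputs onto monomials of past inputs (degree <= 2, lag window L).
    L = 4
    feats, targets = [], []
    for t in range(L, T):
        past = u[t - L:t][::-1]                         # u_{t-1}, u_{t-2}, ...
        phi = list(past)                                # degree-1 monomials
        phi += [past[i] * past[j] for i in range(L) for j in range(i + 1, L)]
        feats.append(phi)
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(feats), np.array(targets), rcond=None)
    print('c^T b   est:', theta[0], ' true:', c @ b)        # coefficient of u_{t-1}
    print('c^T N b est:', theta[4], ' true:', c @ N @ b)    # coefficient of u_{t-1} u_{t-2}
    ```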
  5. Jaggi, Martin (Ed.)
    A classical approach for solving discrete-time nonlinear control on a finite horizon consists in repeatedly minimizing linear quadratic approximations of the original problem around current candidate solutions. While widely popular in many domains, such an approach has mainly been analyzed locally. We provide detailed convergence guarantees to stationary points, as well as local linear convergence rates, for the Iterative Linear Quadratic Regulator (ILQR) algorithm and its Differential Dynamic Programming (DDP) variant. For problems without costs on control variables, we observe that global convergence to minima can be ensured provided that the linearized discrete-time dynamics are surjective and the costs on the state variables are gradient dominated. We further detail quadratic local convergence when the costs are self-concordant. We show that surjectivity of the linearized dynamics holds for appropriate discretization schemes given the existence of a feedback linearization scheme. We present complexity bounds for algorithms based on linear quadratic approximations through the lens of generalized Gauss-Newton methods. Our analysis uncovers several convergence phases for regularized generalized Gauss-Newton algorithms.
    Free, publicly-accessible full text available May 1, 2026
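    The following minimal iLQR sketch (a toy pendulum swing-up; the dynamics, costs, and fixed step size are illustrative, and a practical solver would add a line search) shows the repeated linear-quadratic approximation the abstract refers to: a backward Riccati-style pass on the linearized dynamics followed by a forward rollout of the resulting affine policy.

    ```python
    import numpy as np

    # Pendulum swing-up via iterated linear-quadratic approximations (iLQR).
    dt, T = 0.05, 80
    Q  = np.diag([1.0, 0.1]) * dt           # running state cost
    R  = 0.01 * dt                          # running control cost (scalar input)
    Qf = np.diag([100.0, 10.0])             # terminal cost
    x_goal = np.array([np.pi, 0.0])         # upright equilibrium

    def f(x, u):                            # discrete-time pendulum dynamics
        th, om = x
        return np.array([th + dt * om, om + dt * (u - np.sin(th) - 0.1 * om)])

    def f_jac(x, u):                        # linearization (A, B) around (x, u)
        th, _ = x
        A = np.array([[1.0, dt], [-dt * np.cos(th), 1.0 - 0.1 * dt]])
        B = np.array([0.0, dt])
        return A, B

    def rollout(us):
        xs = np.zeros((T + 1, 2))
        for t in range(T):
            xs[t + 1] = f(xs[t], us[t])
        return xs

    us = np.zeros(T)
    for it in range(100):
        xs = rollout(us)
        # Backward pass: Riccati-style recursion on the LQ approximation.
        Vxx, vx = Qf, Qf @ (xs[T] - x_goal)
        Ks, ks = np.zeros((T, 2)), np.zeros(T)
        for t in reversed(range(T)):
            A, B = f_jac(xs[t], us[t])
            qx, qu = Q @ (xs[t] - x_goal) + A.T @ vx, R * us[t] + B @ vx
            Qxx, Quu, Qux = Q + A.T @ Vxx @ A, R + B @ Vxx @ B + 1e-6, B @ Vxx @ A
            Ks[t], ks[t] = -Qux / Quu, -qu / Quu
            Vxx = Qxx + np.outer(Ks[t], Ks[t]) * Quu \
                  + np.outer(Ks[t], Qux) + np.outer(Qux, Ks[t])
            vx = qx + Ks[t] * (Quu * ks[t] + qu) + Qux * ks[t]
        # Forward pass: apply the affine policy (fixed step size stands in for line search).
        x, new_us = xs[0], np.zeros(T)
        for t in range(T):
            new_us[t] = us[t] + 0.5 * ks[t] + Ks[t] @ (x - xs[t])
            x = f(x, new_us[t])
        us = new_us

    print('final angle:', rollout(us)[T, 0], '(target: pi =', np.pi, ')')
    ```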
  6. We study the query complexity of finding the set of all Nash equilibria \(\mathcal{X}_* \times \mathcal{Y}_*\) in two-player zero-sum matrix games. Fearnley and Savani [18] showed that for any randomized algorithm, there exists an \(n \times n\) input matrix where it needs to query \(\Omega(n^2)\) entries in expectation to compute a single Nash equilibrium. On the other hand, Bienstock et al. [5] showed that there is a special class of matrices for which one can query \(O(n)\) entries and compute their set of all Nash equilibria. However, these results do not fully characterize the query complexity of finding the set of all Nash equilibria in two-player zero-sum matrix games. In this work, we characterize this query complexity in terms of the number of rows \(n\) of the input matrix \(A \in \mathbb{R}^{n \times n}\), the row support size \(k_1 := |\bigcup_{x \in \mathcal{X}_*} \mathrm{supp}(x)|\), and the column support size \(k_2 := |\bigcup_{y \in \mathcal{Y}_*} \mathrm{supp}(y)|\). We design a simple yet non-trivial randomized algorithm that returns the set of all Nash equilibria by querying at most \(O(n k^5 \cdot \mathrm{polylog}(n))\) entries of the input matrix in expectation, where \(k := \max\{k_1, k_2\}\). This upper bound is tight up to a factor of \(\mathrm{poly}(k)\): we show that for any randomized algorithm, there exists an \(n \times n\) input matrix with \(\min\{k_1, k_2\} = 1\) for which it needs to query \(\Omega(nk)\) entries in expectation to find the set of all Nash equilibria.
    Free, publicly-accessible full text available April 25, 2026
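    For context, a single Nash equilibrium of a zero-sum matrix game can be computed from the full matrix by linear programming; the sketch below (a standard textbook construction, not the sublinear-query algorithm of the paper) solves both players' LPs with SciPy:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def nash_zero_sum(A):
        """Equilibrium (x, y, value) of max_x min_y x^T A y via two LPs."""
        n, m = A.shape
        # Row player: maximize v subject to (A^T x)_j >= v, x in the simplex.
        c = np.zeros(n + 1); c[-1] = -1.0                     # minimize -v
        A_ub = np.hstack([-A.T, np.ones((m, 1))])             # v - (A^T x)_j <= 0
        A_eq = np.concatenate([np.ones(n), [0.0]])[None, :]   # sum_i x_i = 1
        res = linprog(c, A_ub=A_ub, b_ub=np.zeros(m), A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * n + [(None, None)], method='highs')
        x, v = res.x[:n], res.x[-1]
        # Column player: minimize w subject to (A y)_i <= w, y in the simplex.
        c2 = np.zeros(m + 1); c2[-1] = 1.0
        A_ub2 = np.hstack([A, -np.ones((n, 1))])              # (A y)_i - w <= 0
        A_eq2 = np.concatenate([np.ones(m), [0.0]])[None, :]
        res2 = linprog(c2, A_ub=A_ub2, b_ub=np.zeros(n), A_eq=A_eq2, b_eq=[1.0],
                       bounds=[(0, None)] * m + [(None, None)], method='highs')
        return x, res2.x[:m], v

    A = np.array([[ 0.0,  1.0, -1.0],    # rock-paper-scissors payoffs (row player)
                  [-1.0,  0.0,  1.0],
                  [ 1.0, -1.0,  0.0]])
    x, y, v = nash_zero_sum(A)
    print(x, y, v)                       # both near uniform (1/3 each), value ~ 0
    ```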
  7. Li, Y; Mandt, S; Agrawal, S; Khan, E (Ed.)
    We study the problem of representational transfer in offline Reinforcement Learning (RL), where a learner has access to episodic data from a number of source tasks collected a priori, and aims to learn a shared representation to be used in finding a good policy for a target task. Unlike in online RL where the agent interacts with the environment while learning a policy, in the offline setting there cannot be such interactions in either the source tasks or the target task; thus multi-task offline RL can suffer from incomplete coverage. We propose an algorithm to compute pointwise uncertainty measures for the learnt representation in low-rank MDPs, and establish a data-dependent upper bound for the suboptimality of the learnt policy for the target task. Our algorithm leverages the collective exploration done by source tasks to mitigate poor coverage at some points by a few tasks, thus overcoming the limitation of needing uniformly good coverage for a meaningful transfer by existing offline algorithms. We complement our theoretical results with empirical evaluation on a rich-observation MDP which requires many samples for complete coverage. Our findings illustrate the benefits of penalizing and quantifying the uncertainty in the learnt representation. 
    Free, publicly-accessible full text available April 23, 2026
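    One common way to make such pointwise uncertainty concrete (a sketch assuming a linear d-dimensional representation, not the paper's exact estimator) is the elliptical bonus sqrt(phi^T Lambda^{-1} phi), where Lambda is the regularized covariance of the pooled offline features: directions the source tasks never covered receive large bonuses and can be penalized pessimistically.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Pooled offline features phi(s, a) from the source tasks (sizes illustrative).
    d, n = 5, 400
    Phi = rng.normal(size=(n, d))
    Phi[:, -1] *= 0.05                           # one direction is poorly covered

    lam = 1.0
    Lambda = Phi.T @ Phi + lam * np.eye(d)       # regularized empirical covariance
    Lambda_inv = np.linalg.inv(Lambda)

    def bonus(phi):
        """Pointwise uncertainty sqrt(phi^T Lambda^{-1} phi); large off the data support."""
        return np.sqrt(phi @ Lambda_inv @ phi)

    print('well covered  :', bonus(np.eye(d)[0]))
    print('poorly covered:', bonus(np.eye(d)[-1]))   # much larger

    # A pessimistic value estimate subtracts beta * bonus from the learnt Q-values,
    # steering the policy away from state-actions the offline data cannot certify.
    ```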
  8. The rise of foundation models fine-tuned on human feedback from potentially untrusted users has increased the risk of adversarial data poisoning, necessitating the study of the robustness of learning algorithms against such attacks. Existing research on provable certified robustness against data poisoning primarily focuses on static adversaries, who modify a fraction of the dataset before the training algorithm is applied. In practice, particularly when learning from human feedback online, adversaries can observe and react to the learning process, injecting poisoned samples that optimize their objectives better than poisoning a static dataset once, in advance, would allow. Indeed, prior work has shown that online dynamic adversaries can be significantly more powerful than static ones. We present a novel framework for computing certified bounds on the impact of dynamic poisoning, and use these certificates to design robust learning algorithms. We illustrate the framework on mean estimation and binary classification problems and outline directions for extending it in future work.
    Free, publicly-accessible full text available April 23, 2026
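    The gap between static and dynamic adversaries is easy to see in a toy version of the mean estimation problem (an illustrative construction, not the paper's framework): against an online estimator that rejects samples far from its running estimate, a static poisoner committed to an extreme value is filtered out, while a dynamic one that watches the estimate slips poison just inside the acceptance threshold.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Online mean estimation with a reject-if-far filter (threshold tau).
    T, eps, tau = 5000, 0.1, 3.0    # rounds, poisoning rate, acceptance threshold

    def run(adversary):
        est, n_acc = 0.0, 0
        for t in range(T):
            if adversary != 'none' and rng.random() < eps:
                # The dynamic adversary reacts to the current estimate; the static
                # one committed to a fixed extreme value in advance.
                z = est + 0.99 * tau if adversary == 'dynamic' else 8.0
            else:
                z = rng.normal()             # clean samples, true mean 0
            if abs(z - est) <= tau:          # defense: reject far-away samples
                n_acc += 1
                est += (z - est) / n_acc
        return est

    print('no attack:', run('none'))         # ~0
    print('static   :', run('static'))       # poison at 8.0 is filtered out: ~0
    print('dynamic  :', run('dynamic'))      # adaptive poison passes the filter: large bias
    ```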
  9. A fundamental problem in machine learning is to understand how neural networks make accurate predictions, while seemingly bypassing the curse of dimensionality. A possible explanation is that common training algorithms for neural networks implicitly perform dimensionality reduction, a process called feature learning. Recent work [A. Radhakrishnan, D. Beaglehole, P. Pandit, M. Belkin, Science 383, 1461–1467 (2024)] posited that the effects of feature learning can be elicited from a classical statistical estimator called the average gradient outer product (AGOP). The authors proposed Recursive Feature Machines (RFMs) as an algorithm that explicitly performs feature learning by alternating between 1) reweighting the feature vectors by the AGOP and 2) learning the prediction function in the transformed space. In this work, we develop theoretical guarantees for how RFM performs dimensionality reduction by focusing on the class of overparameterized problems arising in sparse linear regression and low-rank matrix recovery. Specifically, we show that RFM restricted to linear models (lin-RFM) reduces to a variant of the well-studied Iteratively Reweighted Least Squares (IRLS) algorithm. Furthermore, our results connect feature learning in neural networks with classical sparse recovery algorithms and shed light on how neural networks recover low-rank structure from data. In addition, we provide an implementation of lin-RFM that scales to matrices with millions of missing entries. Our implementation is faster than standard IRLS algorithms since it avoids forming singular value decompositions. It also outperforms deep linear networks for sparse linear regression and low-rank matrix completion.
    Free, publicly-accessible full text available April 1, 2026
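    A minimal sketch of the lin-RFM/IRLS correspondence for sparse linear regression (assuming a diagonal reweighting and illustrative damping constants; this is not the authors' scalable implementation): alternate minimum-weighted-norm interpolation with reweighting by the diagonal AGOP of the linear predictor f(x) = w^T x, whose square root recovers the classical IRLS weights for the ℓ1 objective.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Underdetermined sparse regression: d features, n < d samples, s-sparse truth.
    n, d, s = 40, 100, 5
    X = rng.normal(size=(n, d))
    w_true = np.zeros(d)
    w_true[rng.choice(d, s, replace=False)] = rng.normal(size=s)
    y = X @ w_true                               # noiseless measurements

    m = np.ones(d)                               # feature weights (AGOP surrogate)
    for it in range(30):
        D = np.diag(m)
        # (1) Interpolate with minimum weighted norm sum_i w_i^2 / m_i:
        w = D @ X.T @ np.linalg.solve(X @ D @ X.T + 1e-10 * np.eye(n), y)
        # (2) The AGOP of the linear model f(x) = w^T x is w w^T; the square root of
        #     its diagonal gives |w_i|, the classical IRLS reweighting for l1.
        m = np.abs(w) + 1e-8                     # damping avoids division blow-up

    print('relative recovery error:',
          np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
    ```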
  10. Free, publicly-accessible full text available January 22, 2026