Abstract: Motivated by the desire to understand stochastic algorithms for nonconvex optimization that are robust to their hyperparameter choices, we analyze a mini-batched prox-linear iterative algorithm for the canonical problem of recovering an unknown rank-1 matrix from rank-1 Gaussian measurements corrupted by noise. We derive a deterministic recursion that predicts the error of this method and show, using a non-asymptotic framework, that this prediction is accurate for any batch-size and a large range of step-sizes. In particular, our analysis reveals that this method, though stochastic, converges linearly from a local initialization with a fixed step-size to a statistical error floor. Our analysis also exposes how the batch-size, step-size, and noise level affect the (linear) convergence rate and the eventual statistical estimation error, and we demonstrate how to use our deterministic predictions to perform hyperparameter tuning (e.g., step-size and batch-size selection) without ever running the method. On a technical level, our analysis is enabled in part by showing that the fluctuations of the empirical iterates around our deterministic predictions scale with the error of the previous iterate.
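For intuition, here is a minimal NumPy sketch of a mini-batched prox-linear update of the kind described above, under the assumed measurement model $y_i = (a_i^\top u_\star)(b_i^\top v_\star) + \varepsilon_i$; the model, dimensions, step size, and batch size below are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def prox_linear_step(u, v, A, B, y, step_size):
    """One mini-batch prox-linear step for the loss (1/2m) * sum_i ((a_i'u)(b_i'v) - y_i)^2:
    linearize the residual around (u, v) and penalize movement by ||(du, dv)||^2 / (2*step_size)."""
    m, d = A.shape
    au, bv = A @ u, B @ v
    resid = au * bv - y
    # Jacobian of the residual with respect to (du, dv): [bv_i * a_i, au_i * b_i].
    J = np.hstack([A * bv[:, None], B * au[:, None]])
    # Normal equations of the prox-linear subproblem.
    H = J.T @ J / m + np.eye(2 * d) / step_size
    g = -J.T @ resid / m
    z = np.linalg.solve(H, g)
    return u + z[:d], v + z[d:]

# Toy run on synthetic data (all sizes and the noise level are illustrative).
rng = np.random.default_rng(0)
d, n, batch, sigma = 20, 4000, 64, 0.1
u_star, v_star = rng.standard_normal(d), rng.standard_normal(d)
A, B = rng.standard_normal((n, d)), rng.standard_normal((n, d))
y = (A @ u_star) * (B @ v_star) + sigma * rng.standard_normal(n)

# Local initialization: a small perturbation of the truth.
u = u_star + 0.3 * rng.standard_normal(d)
v = v_star + 0.3 * rng.standard_normal(d)
for t in range(200):
    idx = rng.choice(n, size=batch, replace=False)
    u, v = prox_linear_step(u, v, A[idx], B[idx], y[idx], step_size=1.0)

err = np.linalg.norm(np.outer(u, v) - np.outer(u_star, v_star)) / np.linalg.norm(np.outer(u_star, v_star))
print(f"relative estimation error: {err:.3f}")
```

Each step linearizes the residual around the current iterate and solves the resulting regularized least-squares subproblem on the mini-batch; the step size controls the strength of the proximal penalty.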
-
Abstract: Detection of a planted dense subgraph in a random graph is a fundamental statistical and computational problem that has been extensively studied in recent years. We study a hypergraph version of the problem. Consider the $r$-uniform Erdős–Rényi hypergraph model with $n$ vertices and edge density $p$. We consider detecting the presence of a planted denser subhypergraph in a hypergraph drawn from this model. Focusing on tests that are low-degree polynomials of the entries of the adjacency tensor, we determine the threshold between the easy and hard regimes for the detection problem, with the threshold taking a different form in each of two parameter regimes. Our results are already new in the graph case $r = 2$, as we consider the subtle log-density regime where hardness based on average-case reductions is not known. Our proof of low-degree hardness is based on a conditional variant of the standard low-degree likelihood calculation.
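As a toy illustration of the test class, the sketch below evaluates the simplest degree-1 polynomial statistic, the total hyperedge count, on a synthetic 3-uniform hypergraph with and without a planted denser subhypergraph. The sizes and densities are illustrative assumptions, and this statistic is far weaker than the optimal low-degree tests analyzed in the paper.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
n, k, p, q = 40, 12, 0.02, 0.30   # illustrative sizes and densities

def sample_hypergraph(planted):
    """3-uniform Erdos-Renyi hypergraph on n vertices with edge density p;
    if planted, the triples inside a fixed k-vertex subset appear with density q instead."""
    dense = set(range(k)) if planted else set()
    edges = set()
    for e in combinations(range(n), 3):
        prob = q if set(e) <= dense else p
        if rng.random() < prob:
            edges.add(e)
    return edges

# Degree-1 test statistic: total number of hyperedges.
null = [len(sample_hypergraph(False)) for _ in range(200)]
alt = [len(sample_hypergraph(True)) for _ in range(200)]
# A crude separation measure between the two hypotheses.
gap = (np.mean(alt) - np.mean(null)) / np.std(null)
print(f"mean null={np.mean(null):.1f}, mean planted={np.mean(alt):.1f}, z-gap={gap:.2f}")
```

At these toy densities the edge count already separates the hypotheses; the interesting regimes in the paper are those where such simple statistics fail and higher-degree polynomials are needed.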
-
Abstract: We consider the problem of estimating the factors of a rank-$1$ matrix with i.i.d. Gaussian, rank-$1$ measurements that are nonlinearly transformed and corrupted by noise. Considering two prototypical choices for the nonlinearity, we study the convergence properties of a natural alternating update rule for this non-convex optimization problem starting from a random initialization. We show sharp convergence guarantees for a sample-split version of the algorithm by deriving a deterministic one-step recursion that is accurate even in high-dimensional problems. Notably, while the infinite-sample population update is uninformative and suggests exact recovery in a single step, the algorithm—and our deterministic one-step prediction—converges geometrically fast from a random initialization. Our sharp, non-asymptotic analysis also exposes several other fine-grained properties of this problem, including how the nonlinearity and noise level affect convergence behaviour. On a technical level, our results are enabled by showing that the empirical error recursion can be predicted by our deterministic one-step updates within fluctuations of the order $n^{-1/2}$ when each iteration is run with $n$ observations. Our technique leverages leave-one-out tools originating in the literature on high-dimensional $M$-estimation and provides an avenue for sharply analyzing complex iterative algorithms from a random initialization in other high-dimensional optimization problems with random data.
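As a hedged illustration of a sample-split alternating scheme of this flavor, the sketch below takes an absolute-value nonlinearity, imputes signs from the current iterate, and solves a least-squares problem for one factor at a time on fresh measurements. The model, nonlinearity, and parameters are assumptions for illustration and need not coincide with the exact update rule analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n_per_step, sigma, T = 25, 3000, 0.05, 25

u_star, v_star = rng.standard_normal(d), rng.standard_normal(d)

def fresh_batch(m):
    """Sample splitting: each update uses fresh measurements (A, B, y)
    from the assumed model y_i = |(a_i'u*)(b_i'v*)| + noise."""
    A, B = rng.standard_normal((m, d)), rng.standard_normal((m, d))
    y = np.abs((A @ u_star) * (B @ v_star)) + sigma * rng.standard_normal(m)
    return A, B, y

# Random initialization.
u, v = rng.standard_normal(d), rng.standard_normal(d)

for t in range(T):
    # Update u with v held fixed: impute signs from the current iterate,
    # then solve the resulting weighted least-squares problem.
    A, B, y = fresh_batch(n_per_step)
    s = np.sign((A @ u) * (B @ v))
    D = A * (B @ v)[:, None]                    # rows: (b_i'v) * a_i
    u = np.linalg.lstsq(D, s * y, rcond=None)[0]

    # Symmetric update for v with u held fixed, on a fresh batch.
    A, B, y = fresh_batch(n_per_step)
    s = np.sign((A @ u) * (B @ v))
    D = B * (A @ u)[:, None]
    v = np.linalg.lstsq(D, s * y, rcond=None)[0]

# Report error up to the inherent global sign ambiguity of the factorization.
M_hat, M_star = np.outer(u, v), np.outer(u_star, v_star)
err = min(np.linalg.norm(M_hat - M_star), np.linalg.norm(M_hat + M_star)) / np.linalg.norm(M_star)
print(f"relative error after {T} rounds: {err:.3f}")
```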
-
Network alignment or graph matching—figuring out how vertices across different networks correspond to each other—is a key challenge in many fields, from protecting online privacy to mapping biological data, improving computer vision, and even understanding languages. However, this problem falls into the class of notoriously difficult quadratic assignment problems, which are NP-hard to solve or approximate. Despite these challenges, researchers Mao, Wu, Xu, and Yu have made a major breakthrough. In their paper, “Random Graph Matching at Otter’s Threshold via Counting Chandeliers,” they introduce an innovative algorithm that can successfully match two random networks whenever the square of their edge correlation exceeds Otter’s constant (≈0.338). Their key innovation lies in counting chandeliers—specially designed tree-like structures—to identify corresponding vertices across the networks. The algorithm correctly matches nearly all vertices with high probability and even achieves perfect matching whenever the data allows. This is the first-ever polynomial-time algorithm capable of achieving perfect and near-perfect matching with an explicit constant correlation for both dense and sparse networks, bridging a long-standing gap between statistical limits and algorithmic performance.
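To make the problem setup concrete, here is a small sketch that generates a pair of correlated Erdős–Rényi graphs by subsampling a common parent graph and then tries to recover the hidden vertex correspondence with a naive degree-profile matching. This baseline is not the chandelier-counting algorithm of Mao, Wu, Xu, and Yu, the parameters are illustrative, and it typically recovers only a small fraction of vertices, which is exactly why more powerful statistics are needed.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, s = 300, 0.08, 0.95   # illustrative: parent density p, edge-retention (correlation) s

def subsample(parent, s):
    """Keep each parent edge independently with probability s, then symmetrize."""
    keep = np.triu(rng.random((n, n)) < s, 1)
    G = parent & keep
    return G | G.T

# Correlated Erdos-Renyi pair: two independent subsamples of a common parent graph;
# the second copy is relabeled by a hidden permutation pi (vertex i in G1 <-> vertex pi[i] in G2p).
parent = np.triu(rng.random((n, n)) < p, 1)
parent = parent | parent.T
G1, G2 = subsample(parent, s), subsample(parent, s)
pi = rng.permutation(n)
G2p = np.zeros_like(G2)
G2p[np.ix_(pi, pi)] = G2

def degree_signature(G, bins):
    """Per-vertex histogram of neighbor degrees: a crude, relabeling-invariant signature."""
    deg = G.sum(axis=1)
    return np.stack([np.histogram(deg[G[i]], bins=bins)[0] for i in range(len(G))])

bins = np.arange(0, 60, 5)
S1, S2 = degree_signature(G1, bins), degree_signature(G2p, bins)
# Match each G1 vertex to the G2 vertex with the closest signature (L1 distance).
cost = np.abs(S1[:, None, :] - S2[None, :, :]).sum(axis=2)
match = cost.argmin(axis=1)
print(f"fraction of vertices matched correctly: {(match == pi).mean():.2f}")
```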
