Test-time training (TTT) methods explicitly update the weights of a model to adapt to the specific test instance, and they have found success in a variety of settings, including most recently language modeling and reasoning. To demystify this success, we investigate a gradient-based TTT algorithm for in-context learning, where we train a transformer model on the in-context demonstrations provided in the test prompt. Specifically, we provide a comprehensive theoretical characterization of linear transformers when the update rule is a single gradient step. Our theory (i) delineates the role of alignment between pretraining distribution and target task, (ii) demystifies how TTT can alleviate distribution shift, and (iii) quantifies the sample complexity of TTT, including how it can significantly reduce the eventual sample size required for in-context learning. As our empirical contribution, we study the benefits of TTT for TabPFN, a tabular foundation model. In line with our theory, we demonstrate that TTT significantly reduces the required sample size for tabular classification (3 to 5 times fewer), unlocking substantial inference efficiency with a negligible training cost.
Free, publicly-accessible full text available July 31, 2026
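As a rough illustration of the single-gradient-step update described in this abstract, the sketch below adapts a stand-in model on the prompt's demonstrations before answering the query. The `ttt_single_step` helper, the squared loss, the learning rate, and the `torch.nn.Linear` stand-in are illustrative assumptions, not the paper's exact linear-transformer construction.

```python
# Hypothetical sketch of single-step test-time training (TTT) on the
# in-context demonstrations of a test prompt. Names and hyperparameters
# below are assumptions for illustration only.
import copy
import torch

def ttt_single_step(model, demos_x, demos_y, lr=1e-2):
    """Return a copy of `model` adapted by one gradient step on the demonstrations."""
    adapted = copy.deepcopy(model)
    adapted.train()
    loss = torch.nn.functional.mse_loss(adapted(demos_x), demos_y)
    loss.backward()                               # gradients of the demo loss w.r.t. the weights
    with torch.no_grad():
        for p in adapted.parameters():
            if p.grad is not None:
                p -= lr * p.grad                  # the single test-time gradient step
    return adapted.eval()

# Usage: adapt on the test prompt's demonstrations, then predict the query point.
model = torch.nn.Linear(16, 1)                    # stand-in for a pretrained (linear) transformer
demos_x, demos_y = torch.randn(32, 16), torch.randn(32, 1)
query_x = torch.randn(1, 16)
adapted = ttt_single_step(model, demos_x, demos_y)
with torch.no_grad():
    y_hat = adapted(query_x)
```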
Autoencoders are a popular model in many branches of machine learning and lossy data compression. However, their fundamental limits, the performance of gradient methods and the features learnt during optimization remain poorly understood, even in the two-layer setting. In fact, earlier work has considered either linear autoencoders or specific training regimes (leading to vanishing or diverging compression rates). Our paper addresses this gap by focusing on non-linear two-layer autoencoders trained in the challenging proportional regime in which the input dimension scales linearly with the size of the representation. Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods; their structure is also unveiled, thus leading to a concise description of the features obtained via training. For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders. Finally, while the results are proved for Gaussian data, numerical simulations on standard datasets display the universality of the theoretical predictions.
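A minimal sketch of the setting described in this abstract, under assumed choices: a two-layer autoencoder whose hidden width is proportional to the input dimension, trained on Gaussian data. The dimensions, optimizer settings, and the `tanh` surrogate used in place of the sign activation during training are illustrative assumptions, not the paper's analysis.

```python
# Hypothetical sketch of a shallow autoencoder in the proportional regime:
# hidden width m scales linearly with input dimension d. All settings below
# are assumptions for illustration only.
import torch

d, rate = 256, 0.5                        # input dimension and compression rate (assumed)
m = int(rate * d)                         # representation size, proportional to d

encoder = torch.nn.Linear(d, m, bias=False)
decoder = torch.nn.Linear(m, d, bias=False)
opt = torch.optim.SGD(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)

for step in range(1000):
    x = torch.randn(2048, d)              # fresh samples from a Gaussian source
    code = torch.tanh(encoder(x))         # smooth surrogate for the sign activation
    loss = ((decoder(code) - x) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At evaluation time, the hard sign gives one-bit codes for lossy compression.
with torch.no_grad():
    x = torch.randn(8, d)
    x_hat = decoder(torch.sign(encoder(x)))
```

The `tanh` surrogate is used only because the sign function has zero gradient almost everywhere; swapping in the hard sign at evaluation is one common, assumed workaround rather than the procedure analyzed in the paper.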