Search for: All records

Award ID contains: 2217069


  1. In pretraining data detection, the goal is to detect whether a given sentence is in the dataset used for training a Large Language Model (LLM). Recent methods (such as Min-K% and Min-K%++) reveal that most training corpora are likely contaminated with both sensitive content and evaluation benchmarks, leading to inflated test set performance. These methods sometimes fail to detect samples from the pretraining data, primarily because they depend on statistics composed of causal token likelihoods. We introduce Infilling Score, a new test-statistic based on non-causal token likelihoods. Infilling Score can be computed for autoregressive models, without re-training, using Bayes' rule. A naive application of Bayes' rule scales linearly with the vocabulary size. However, we propose a ratio test-statistic whose computation is invariant to vocabulary size. Empirically, our method achieves a significant accuracy gain over state-of-the-art methods including Min-K% and Min-K%++ on the WikiMIA benchmark across seven models with different parameter sizes. Further, we achieve higher AUC compared to reference-free methods on the challenging MIMIR benchmark. Finally, we create a benchmark dataset consisting of recent data sources published after the release of Llama-3; this benchmark provides a statistical baseline to indicate potential corpora used for Llama-3 training.
    Free, publicly-accessible full text available March 26, 2026
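    As a rough illustration of the causal-likelihood statistics this abstract contrasts against (Min-K% and its variants), the sketch below computes a Min-K%-style membership score with a generic Hugging Face causal LM. It is a minimal sketch, not the paper's implementation: the model choice (gpt2), the threshold k, and the function name are illustrative assumptions, and the non-causal Infilling Score itself is not implemented here.

        # Minimal Min-K%-style score: average log-likelihood of the k% least
        # likely tokens under a causal LM (higher => more likely seen in training).
        import torch
        from transformers import AutoModelForCausalLM, AutoTokenizer

        def min_k_percent_score(text, model, tokenizer, k=0.2):
            ids = tokenizer(text, return_tensors="pt").input_ids
            with torch.no_grad():
                logits = model(ids).logits                      # (1, seq_len, vocab)
            log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
            targets = ids[0, 1:]                                # next-token targets
            token_lls = log_probs.gather(1, targets.unsqueeze(-1)).squeeze(-1)
            # The Infilling Score would replace these causal likelihoods with
            # non-causal (infilling) likelihoods derived via Bayes' rule.
            n = max(1, int(k * token_lls.numel()))
            return torch.topk(token_lls, n, largest=False).values.mean().item()

        tok = AutoTokenizer.from_pretrained("gpt2")             # small stand-in model
        lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()
        print(min_k_percent_score("The quick brown fox jumps over the lazy dog.", lm, tok))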
  2. A financial network is a web of contracts between firms. Each firm wants the best possible contracts. However, a contract between two firms requires the cooperation of both. This tension between cooperation and competition is studied in “Incentive-Aware Models of Financial Networks” by Akhil Jalan, Deepayan Chakrabarti, and Purnamrita Sarkar. They show how contract negotiations lead to a stable network where no firm wants to change contract sizes. In this network, the size of any contract depends on the beliefs of all firms, not just the contract’s two parties. Minor news about one firm can affect these beliefs, causing drastic changes in the network. Moreover, under realistic settings, a regulator cannot trace the source of such changes. This research illustrates the importance of firms’ beliefs and their implications for network stability. The insights could inform regulatory strategies and financial risk management.
  3. We study the design of embeddings into Euclidean space with outliers. Given a metric space (X, d) and an integer k, the goal is to embed all but k points in X (called the “outliers”) into ℓ2 with the smallest possible distortion c. Finding the optimal distortion c for a given outlier set size k, or alternately the smallest k for a given target distortion c, are both NP-hard problems. In fact, it is UGC-hard to approximate k to within a factor smaller than 2 even when the metric sans outliers is isometrically embeddable into ℓ2. We consider bi-criteria approximations. Our main result is a polynomial time algorithm that approximates the outlier set size to within an O(log² k) factor and the distortion to within a constant factor. The main technical component in our result is an approach for constructing Lipschitz extensions of embeddings into Banach spaces (such as ℓp spaces). We consider a stronger version of Lipschitz extension that we call a nested composition of embeddings: given a low distortion embedding of a subset S of the metric space X, our goal is to extend this embedding to all of X such that the distortion over S is preserved, whereas the distortion over the remaining pairs of points in X is bounded by a function of the size of X \ S. Prior work on Lipschitz extension considers settings where the size of X is potentially much larger than that of S and the expansion bounds depend on |S|. In our setting, the set S is nearly all of X and the remaining set X \ S, a.k.a. the outliers, is small. We achieve an expansion bound that is polylogarithmic in |X \ S|.
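    For reference, the notion of distortion used above can be written out explicitly. The block below gives the standard definition (the symbols K, f, and r are illustrative, not taken from the paper) and restates the bi-criteria guarantee from the abstract.

        % Distortion of an embedding that discards an outlier set $K$, $|K| \le k$.
        Let $(X, d)$ be a metric space and $K \subseteq X$ with $|K| \le k$.
        A map $f : X \setminus K \to \ell_2$ has distortion at most $c$ if there is a
        scale $r > 0$ such that, for all $x, y \in X \setminus K$,
        \[
          r \, d(x, y) \;\le\; \lVert f(x) - f(y) \rVert_2 \;\le\; c \, r \, d(x, y).
        \]
        % Bi-criteria guarantee, as stated in the abstract: whenever some pair $(k, c)$
        % is achievable, the algorithm finds an outlier set of size $O(k \log^2 k)$ and
        % an embedding of the remaining points with distortion $O(c)$.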