Search for: All records

Award ID contains: 2209975

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Domain adaptation (DA) is a statistical learning problem that arises when the distribution of the source data used to train a model differs from that of the target data used to evaluate the model. While many DA algorithms have demonstrated considerable empirical success, blindly applying these algorithms can often lead to worse performance on new datasets. To address this, it is crucial to clarify the assumptions under which a DA algorithm has good target performance. In this work, we focus on the assumption of the presence of conditionally invariant components (CICs), which are relevant for prediction and remain conditionally invariant across the source and target data. We demonstrate that CICs, which can be estimated through conditional invariant penalty (CIP), play three prominent roles in providing target risk guarantees in DA. First, we propose a new algorithm based on CICs, importance-weighted conditional invariant penalty (IW-CIP), which has target risk guarantees beyond simple settings such as covariate shift and label shift. Second, we show that CICs help identify large discrepancies between source and target risks of other DA algorithms. Finally, we demonstrate that incorporating CICs into the domain invariant projection (DIP) algorithm can address its failure scenario caused by label-flipping features. We support our new algorithms and theoretical findings via numerical experiments on synthetic data, MNIST, CelebA, Camelyon17, and DomainNet datasets. (A sketch of a conditional-invariance penalty appears after this listing.)
    Free, publicly-accessible full text available May 25, 2026
  2. We develop the Latent Exploration Score (LES) to mitigate over-exploration in Latent Space Optimization (LSO), a popular method for solving black-box discrete optimization problems. LSO utilizes continuous optimization within the latent space of a Variational Autoencoder (VAE) and is known to be susceptible to over-exploration, which manifests in unrealistic solutions that reduce its practicality. LES leverages the trained decoder’s approximation of the data distribution, and can be employed with any VAE decoder, including pretrained ones, without additional training, architectural changes, or access to the training data. Our evaluation across five LSO benchmark tasks and twenty-two VAE models demonstrates that LES always enhances the quality of the solutions while maintaining high objective values, leading to improvements over existing solutions in most cases. We believe that LES’ ability to identify out-of-distribution areas, its differentiability, and its computational tractability will open new avenues for LSO. (A sketch of penalized latent-space optimization appears after this listing.)
    Free, publicly-accessible full text available May 1, 2026
  3. Large language models (LLMs) have revolutionized machine learning due to their ability to capture complex interactions between input features. Popular post-hoc explanation methods like SHAP provide marginal feature attributions, while their extensions to interaction importances only scale to small input lengths (≈20). We propose Spectral Explainer (SPEX), a model-agnostic interaction attribution algorithm that efficiently scales to large input lengths (≈1000). SPEX exploits the natural sparsity among interactions that is common in real-world data and applies a sparse Fourier transform using a channel decoding algorithm to efficiently identify important interactions. We perform experiments across three difficult long-context datasets that require LLMs to utilize interactions between inputs to complete the task. For large inputs, SPEX outperforms marginal attribution methods by up to 20% in terms of faithfully reconstructing LLM outputs. Further, SPEX successfully identifies key features and interactions that strongly influence model output. For one of our datasets, HotpotQA, SPEX provides interactions that align with human annotations. Finally, we use our model-agnostic approach to generate explanations that demonstrate abstract reasoning in closed-source LLMs (GPT-4o mini) and compositional reasoning in vision-language models. (A sketch of the masked value function such methods query appears after this listing.)
    Free, publicly-accessible full text available May 1, 2026
  4. Background: The limited diagnostic accuracy of prostate-specific antigen screening for prostate cancer (PCa) has prompted innovative solutions, such as the state-of-the-art 18-gene urine test for clinically significant PCa (MyProstateScore 2.0 (MPS2)). Objective: We aim to develop a non-invasive biomarker test, the simplified MPS2 (sMPS2), which achieves similar state-of-the-art accuracy as MPS2 for predicting high-grade PCa but requires substantially fewer genes than the 18-gene MPS2, to improve its accessibility for routine clinical care. Methods: We grounded the development of sMPS2 in the Predictability, Computability, and Stability (PCS) framework for veridical data science. Under this framework, we stress-tested the development of sMPS2 across various data preprocessing and modeling choices and developed a stability-driven PCS ranking procedure for selecting the most predictive and robust genes for use in sMPS2. Results: The final sMPS2 model consisted of 7 genes and achieved a 0.784 AUROC (95% confidence interval, 0.742–0.825) for predicting high-grade PCa on a blinded external validation cohort. This is only 2.3% lower than the 18-gene MPS2, a difference similar in magnitude to the 1–2% uncertainty induced by different data preprocessing choices. Conclusions: The 7-gene sMPS2 provides a unique opportunity to expand the reach and adoption of non-invasive PCa screening. (A sketch of a stability-driven feature ranking appears after this listing.)
  5. Concept bottleneck models (CBMs) aim to improve model interpretability by predicting human-level “concepts” in a bottleneck within a deep learning model architecture. However, how the predicted concepts are used to predict the target either remains black-box or is simplified to maintain interpretability at the cost of prediction performance. We propose to use Fast Interpretable Greedy-Tree Sums (FIGS) to obtain Binary Distillation (BD). This new method, called FIGS-BD, distills a binary-augmented concept-to-target portion of the CBM into an interpretable tree-based model, while maintaining the competitive prediction performance of the CBM teacher. FIGS-BD can be used in downstream tasks to explain and decompose CBM predictions into interpretable binary-concept-interaction attributions and to guide adaptive test-time intervention. Across 4 datasets, we demonstrate that our adaptive test-time intervention identifies key concepts that significantly improve performance in realistic human-in-the-loop settings that allow only a limited number of concept interventions. All code is available on GitHub (https://github.com/mattyshen/adaptiveTTI). (A sketch of the distillation step appears after this listing.)
    Free, publicly-accessible full text available March 5, 2026
  6. Modern machine learning has achieved impressive prediction performance, but often sacrifices interpretability, a critical consideration in high-stakes domains such as medicine. In such settings, practitioners often use highly interpretable decision tree models, but these suffer from an inductive bias against additive structure. To overcome this bias, we propose Fast Interpretable Greedy-Tree Sums (FIGS), which generalizes the Classification and Regression Trees (CART) algorithm to simultaneously grow a flexible number of trees in summation. By combining logical rules with addition, FIGS adapts to additive structure while remaining highly interpretable. Experiments on real-world datasets show that FIGS achieves state-of-the-art prediction performance. To demonstrate the usefulness of FIGS in high-stakes domains, we adapt FIGS to learn clinical decision instruments (CDIs), which are tools for guiding decision-making. Specifically, we introduce a variant of FIGS known as Group Probability-Weighted Tree Sums (G-FIGS) that accounts for heterogeneity in medical data. G-FIGS derives CDIs that reflect domain knowledge and enjoy improved specificity (by up to 20% over CART) without sacrificing sensitivity or interpretability. Theoretically, we prove that FIGS learns components of additive models, a property we refer to as disentanglement. Further, we show (under oracle conditions) that tree-sum models leverage disentanglement to generalize more efficiently than single-tree models when fitted to additive regression functions. Finally, to avoid overfitting with an unconstrained number of splits, we develop Bagging-FIGS, an ensemble version of FIGS that borrows the variance reduction techniques of random forests. Bagging-FIGS performs competitively with random forests and XGBoost on real-world datasets. (A sketch of a tree-sum fit appears after this listing.)
    Free, publicly-accessible full text available February 18, 2026
  7. In this paper, we study the role of initialization in Low-Rank Adaptation (LoRA) as originally introduced in Hu et al. [19]. Essentially, to start finetuning from the pretrained model, one can either initialize B to zero and A to random (the default initialization in the PEFT package), or vice versa. In both cases, the product BA is equal to zero at initialization, which makes finetuning start from the pretrained model. These two initialization schemes are seemingly similar: in principle, they should yield the same performance and share the same optimal learning rate. We demonstrate that this intuition is incorrect and that the first scheme (initializing B to zero and A to random) on average yields better performance than the second. Our theoretical analysis shows that the reason behind this might be that the first initialization allows the use of larger learning rates (without causing output instability) than the second, resulting in more efficient learning under the first scheme. We validate our results with extensive experiments on LLMs. (A sketch of the two initialization schemes appears after this listing.)
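For entry 1, the key object is a penalty that encourages the learned features to have the same class-conditional distribution in every source domain. Below is a minimal sketch of such a conditional-invariance penalty that matches only class-conditional feature means; the featurizer, the squared-difference form of the penalty, and the penalty weight are illustrative assumptions, not the paper's IW-CIP algorithm, which adds an importance-weighting step on top of CIP.

```python
# Minimal sketch (assumption-laden, not the paper's IW-CIP): penalize, for each label,
# the spread of class-conditional feature means across source domains.
import torch


def conditional_invariance_penalty(features_by_domain, labels_by_domain, num_classes):
    """Sum over classes of the squared deviation of per-domain class means from their average."""
    penalty = torch.tensor(0.0)
    for y in range(num_classes):
        means = []
        for feats, labels in zip(features_by_domain, labels_by_domain):
            mask = labels == y
            if mask.any():
                means.append(feats[mask].mean(dim=0))
        if len(means) > 1:
            stacked = torch.stack(means)                          # (domains_with_y, d)
            penalty = penalty + ((stacked - stacked.mean(dim=0)) ** 2).sum()
    return penalty


if __name__ == "__main__":
    torch.manual_seed(0)
    featurizer, head = torch.nn.Linear(10, 4), torch.nn.Linear(4, 3)
    xs = [torch.randn(32, 10) for _ in range(2)]                  # two source domains
    ys = [torch.randint(0, 3, (32,)) for _ in range(2)]
    feats = [featurizer(x) for x in xs]
    ce = sum(torch.nn.functional.cross_entropy(head(f), y) for f, y in zip(feats, ys))
    loss = ce + 1.0 * conditional_invariance_penalty(feats, ys, num_classes=3)
    loss.backward()   # gradients flow through both the classification loss and the penalty
```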
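For entry 2, LSO searches a VAE's latent space for points whose decodings score well, and over-exploration is typically curbed by penalizing latent points that drift away from the region the decoder models well. The sketch below shows the general shape of that loop with a toy decoder, a toy objective, and a placeholder realism penalty; every name and the quadratic penalty are illustrative assumptions and do not implement LES, which instead derives its score from the trained decoder's approximation of the data distribution.

```python
# Toy penalized latent-space optimization loop (not LES): maximize an objective of the
# decoded point while discouraging latent values far from the prior's bulk.
import torch

torch.manual_seed(0)
decoder = torch.nn.Sequential(torch.nn.Linear(2, 16), torch.nn.Tanh(), torch.nn.Linear(16, 4))


def objective(x):
    return -(x - 1.0).pow(2).sum()       # toy stand-in for the black-box objective


def realism_penalty(z):
    return z.pow(2).sum()                # placeholder; LES would score z via the decoder


z = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([z], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    loss = -objective(decoder(z)) + 0.1 * realism_penalty(z)
    loss.backward()
    opt.step()
print("optimized latent point:", z.detach())
```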
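For entry 3, interaction-attribution methods of this kind query a value function: the model's score when only a chosen subset of input features is kept. The sketch below builds a toy value function with one hidden pairwise interaction and recovers it by brute-force inclusion-exclusion; this naive enumeration is a stand-in for illustration only, whereas SPEX fits a sparse Fourier representation of the value function with far fewer queries.

```python
# Toy masked value function plus brute-force pairwise interaction detection
# (a naive stand-in, not SPEX's sparse Fourier / channel decoding approach).
import numpy as np

n_features = 8
hidden_pairs = {(1, 4): 2.0}                         # the interaction we hope to recover


def value(mask):
    """Stand-in for a model score when only the features with mask == 1 are kept."""
    score = 0.3 * mask.sum()                         # marginal (first-order) effects
    for (i, j), w in hidden_pairs.items():
        score += w * mask[i] * mask[j]               # pairwise interaction
    return score


empty = np.zeros(n_features)
for i in range(n_features):
    for j in range(i + 1, n_features):
        mi, mj, mij = empty.copy(), empty.copy(), empty.copy()
        mi[i], mj[j] = 1, 1
        mij[[i, j]] = 1
        interaction = value(mij) - value(mi) - value(mj) + value(empty)
        if abs(interaction) > 1e-8:
            print(f"detected interaction ({i}, {j}) with strength {interaction:.2f}")
```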
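For entry 4, a stability-driven ranking refits a model under many perturbations and keeps the features that are selected consistently. The sketch below mimics that idea with bootstrap resampling, a logistic model, and a top-k rule on synthetic data; the model choice, k = 7, and the number of perturbations are illustrative assumptions, not the sMPS2 PCS pipeline, which also varies preprocessing and modeling choices.

```python
# Toy stability-driven feature ranking: count how often each feature lands in the
# top 7 by absolute coefficient across bootstrap refits (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 200, 18                                       # e.g., 18 candidate genes
X = rng.normal(size=(n, p))
y = (X[:, :3].sum(axis=1) + 0.5 * rng.normal(size=n) > 0).astype(int)

counts = np.zeros(p)
for _ in range(100):                                 # bootstrap perturbations
    idx = rng.integers(0, n, size=n)
    coef = LogisticRegression(max_iter=1000).fit(X[idx], y[idx]).coef_[0]
    counts[np.argsort(np.abs(coef))[-7:]] += 1       # record the top-7 features

print("most stable features:", np.argsort(-counts)[:7])
```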
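For entry 5, the distillation step trains an interpretable student to reproduce the CBM teacher's concept-to-target mapping, with the predicted concepts augmented by binarized copies. The sketch below uses synthetic concepts and a single shallow decision tree as the student, purely for illustration; FIGS-BD instead uses a FIGS tree-sum student, and the teacher rule here is invented.

```python
# Toy distillation of a concept-to-target mapping into a shallow tree
# (stand-in student; the paper's FIGS-BD uses a FIGS tree-sum model).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
concept_probs = rng.uniform(size=(1000, 5))              # pretend CBM concept predictions
teacher_score = 2 * concept_probs[:, 0] - concept_probs[:, 1] * concept_probs[:, 2]
teacher_label = (teacher_score > 0.5).astype(int)        # pretend CBM teacher outputs

concepts_binary = (concept_probs > 0.5).astype(float)    # binary augmentation
student_input = np.hstack([concept_probs, concepts_binary])

student = DecisionTreeClassifier(max_depth=3, random_state=0).fit(student_input, teacher_label)
print("fidelity to teacher:", student.score(student_input, teacher_label))
```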
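For entry 6, the central idea is fitting a sum of trees rather than one large tree. The sketch below conveys that with a simplified residual-fitting loop of shallow trees on a toy additive function; real FIGS instead grows all trees jointly, greedily choosing the single best split across every tree at each step, and an implementation is available in the authors' imodels package.

```python
# Simplified sum-of-trees fit on an additive target (a boosting-style stand-in,
# not the FIGS algorithm itself).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 2))
y = np.sin(3 * X[:, 0]) + (X[:, 1] > 0).astype(float)    # additive ground truth

trees, residual = [], y.copy()
for _ in range(3):                                       # small, flexible number of trees
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    trees.append(tree)
    residual = residual - tree.predict(X)                # each tree fits what is left over

prediction = sum(tree.predict(X) for tree in trees)
print("train MSE:", float(np.mean((prediction - y) ** 2)))
```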
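For entry 7, the two schemes differ only in which LoRA factor starts at zero. The sketch below writes out a LoRA linear layer with both options; the rank, scaling, and initialization standard deviations are illustrative assumptions. In either case BA = 0 at initialization, so finetuning starts exactly at the pretrained weights, and the difference only appears once gradients start flowing.

```python
# Minimal LoRA linear layer with the two initialization schemes from entry 7
# (rank, alpha, and init scales are illustrative choices).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    def __init__(self, d_in, d_out, r=8, alpha=16, init="A_random_B_zero"):
        super().__init__()
        # Frozen "pretrained" weight; LoRA learns a low-rank update (alpha/r) * B @ A.
        self.weight = nn.Parameter(torch.randn(d_out, d_in) / d_in ** 0.5, requires_grad=False)
        self.A = nn.Parameter(torch.zeros(r, d_in))
        self.B = nn.Parameter(torch.zeros(d_out, r))
        self.scale = alpha / r
        if init == "A_random_B_zero":        # scheme reported as the PEFT default in the abstract
            nn.init.normal_(self.A, std=1.0 / d_in ** 0.5)
        elif init == "B_random_A_zero":      # the alternative scheme
            nn.init.normal_(self.B, std=1.0 / r ** 0.5)
        # Either way, B @ A == 0 at initialization, so the layer matches the pretrained model.

    def forward(self, x):
        return x @ self.weight.T + self.scale * (x @ self.A.T) @ self.B.T


for scheme in ("A_random_B_zero", "B_random_A_zero"):
    layer = LoRALinear(64, 64, init=scheme)
    out = layer(torch.randn(2, 64))
    print(scheme, "output shape:", tuple(out.shape))
```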