-
Long-context LLMs are increasingly in demand for applications such as retrieval-augmented generation. To defray the cost of pretraining LLMs over long contexts, recent work takes an approach of synthetic context extension: fine-tuning LLMs with synthetically generated long-context data in a post-training stage. However, it remains unclear how and why this synthetic context extension imparts abilities for downstream long-context tasks. In this paper, we investigate fine-tuning on synthetic data for three long-context tasks that require retrieval and reasoning. We vary the realism of the "needle" concepts to be retrieved and the diversity of the surrounding "haystack" context, ranging from using LLMs to construct synthetic documents to using templated relations and creating symbolic datasets. We find that models trained on synthetic data fall short of those trained on real data, but surprisingly, the mismatch can be interpreted and even predicted in terms of a special set of attention heads responsible for retrieval over long contexts: retrieval heads (Wu et al., 2024). The retrieval heads learned on synthetic data have high overlap with retrieval heads learned on real data, and there is a strong correlation between the recall of the learned heads and the downstream performance of a model. Furthermore, with attention knockout and activation patching, we mechanistically show that retrieval heads are necessary and explain model performance, although they are not totally sufficient. Our results shed light on how to interpret synthetic data fine-tuning performance and how to approach creating better data for learning real-world capabilities over long contexts.
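To make the retrieval-head diagnostic concrete, below is a minimal sketch of the recall scoring this line of work builds on, following the rule from Wu et al. (2024) in simplified form: a head scores a hit when, at a step where the model is copying a token from the needle span, that head's strongest attention falls on a matching needle position. The function name and tensor layout are illustrative assumptions, not the paper's code.

```python
# Simplified retrieval-head recall, assuming per-head attention weights
# have already been extracted for one layer during generation.
import torch

def retrieval_recall(attn, needle_span, gen_ids, ctx_ids):
    """attn: (heads, gen_len, ctx_len) attention weights;
    needle_span: (start, end) indices of the needle in the context;
    gen_ids: (gen_len,) generated token ids; ctx_ids: (ctx_len,) context ids."""
    start, end = needle_span
    hits = torch.zeros(attn.shape[0])
    copied = 0
    for t, tok in enumerate(gen_ids):
        # only score steps where the model is copying a needle token
        matches = (ctx_ids[start:end] == tok).nonzero().flatten() + start
        if len(matches) == 0:
            continue
        copied += 1
        top = attn[:, t, :].argmax(dim=-1)        # per-head argmax position
        hits += torch.isin(top, matches).float()  # hit if argmax is in the needle
    return hits / max(copied, 1)                  # per-head recall in [0, 1]
```

Heads with recall near 1 on such a needle-in-a-haystack probe are the candidates whose overlap across synthetic- and real-data fine-tuning the paper examines.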
-
Abstract
Objective: Extracting social determinants of health (SDoH) from medical notes depends heavily on labor-intensive annotations, which are typically task-specific, hampering reusability and limiting sharing. Here, we introduce SDoH-GPT, a novel framework leveraging few-shot learning large language models (LLMs) to automate the extraction of SDoH from unstructured text, aiming to improve both efficiency and generalizability.
Materials and Methods: SDoH-GPT combines few-shot learning LLM methods, which extract SDoH from medical notes, with XGBoost classifiers, which continue to classify SDoH using the annotations generated by the few-shot LLM as training data. This combination exploits the strength of LLMs as strong few-shot learners and the efficiency of XGBoost once the training dataset is sufficient. SDoH-GPT can therefore extract SDoH without relying on extensive medical annotations or costly human intervention.
Results: Our approach achieved tenfold and twentyfold reductions in time and cost, respectively, and superior consistency with human annotators, measured by a Cohen's kappa of up to 0.92. The combination of an LLM and XGBoost ensures high accuracy and computational efficiency while consistently maintaining AUROC scores above 0.90.
Discussion: This study verified SDoH-GPT on three datasets and highlights the potential of leveraging LLMs and XGBoost to revolutionize medical note classification, demonstrating its capability to achieve highly accurate classifications at significantly reduced time and cost.
Conclusion: The key contribution of this study is the integration of an LLM with XGBoost, which enables cost-effective, high-quality annotation of SDoH. This research sets the stage for SDoH extraction to become more accessible, scalable, and impactful in driving future healthcare solutions.
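As a rough illustration of the two-stage recipe described above, the sketch below pairs a few-shot LLM annotator with an XGBoost classifier trained on the resulting silver labels. `llm_annotate` is a hypothetical placeholder for any few-shot prompting call, and the feature choices are assumptions; none of the names come from the SDoH-GPT release.

```python
# Stage 1: a few-shot LLM labels a seed set of notes (silver labels).
# Stage 2: a cheap XGBoost classifier is trained on those labels so
# inference no longer requires LLM calls.
from sklearn.feature_extraction.text import TfidfVectorizer
from xgboost import XGBClassifier

def llm_annotate(note: str) -> int:
    """Hypothetical few-shot LLM call: returns 1 if the note mentions
    the target SDoH (e.g., housing instability), else 0."""
    raise NotImplementedError("plug in your LLM client here")

def train_sdoh_classifier(notes):
    silver_labels = [llm_annotate(n) for n in notes]  # stage 1: LLM annotation
    vec = TfidfVectorizer(max_features=5000)
    X = vec.fit_transform(notes)                      # bag-of-words features
    clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
    clf.fit(X, silver_labels)                         # stage 2: fast classifier
    return vec, clf
```

The design point is the one the abstract makes: the LLM is used only to create training data, so the per-note cost of deployment is that of an XGBoost prediction, not an LLM call.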
-
We consider Euler flows on a two-dimensional (2-D) periodic domain and are interested in the stability, both linear and nonlinear, of a simple equilibrium given by the 2-D Taylor–Green vortex. As the first main result, we provide numerical evidence that such flows possess unstable eigenvalues embedded in the band of the essential spectrum of the linearized operator. However, the unstable eigenfunction is discontinuous at the hyperbolic stagnation points of the base flow, and its regularity is consistent with the prediction of Lin (Intl Math. Res. Not., vol. 2004, issue 41, 2004, pp. 2147–2178). This eigenfunction gives rise to exponential transient growth, with the rate given by the real part of the eigenvalue, followed by passage to a nonlinear instability. As the second main result, we illustrate a fundamentally different, non-modal growth mechanism involving a continuous family of uncorrelated functions instead of an eigenfunction of the linearized operator. Constructed by solving a suitable partial differential equation (PDE) optimization problem, the resulting flows saturate the known estimates on the growth of the semigroup related to the essential spectrum of the linearized Euler operator as the numerical resolution is refined. These findings are contrasted with the results of earlier studies of a similar problem conducted in a slightly viscous setting, where only modal growth of instabilities was observed. This highlights the special stability properties of equilibria in inviscid flows.
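For readers who want to check the base flow, a short numerical sketch (not from the paper) confirms that the 2-D Taylor–Green vortex is a steady Euler solution: its vorticity is proportional to its stream function, so the advection term u·∇ω vanishes identically.

```python
# The 2-D Taylor-Green vortex u = (sin x cos y, -cos x sin y) on the
# 2*pi-periodic square has vorticity w = 2 sin x sin y, a function of the
# stream function alone, so u . grad(w) = 0 and the flow is an equilibrium.
import numpy as np

n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")

u = np.sin(X) * np.cos(Y)
v = -np.cos(X) * np.sin(Y)
w = 2 * np.sin(X) * np.sin(Y)            # vorticity of the base flow

# spectral derivatives on the periodic domain (integer wavenumbers)
k = np.fft.fftfreq(n, d=1.0 / n)
kx, ky = np.meshgrid(k, k, indexing="ij")
w_hat = np.fft.fft2(w)
wx = np.real(np.fft.ifft2(1j * kx * w_hat))
wy = np.real(np.fft.ifft2(1j * ky * w_hat))

residual = u * wx + v * wy               # advection term; ~0 for an equilibrium
print(np.abs(residual).max())            # ~1e-13 at this resolution
```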
-
Functional Magnetic Resonance Imaging (fMRI) is commonly employed to study human brain activity, since it offers insight into the relationship between functional fluctuations and human behavior. To enhance analysis and comprehension of brain activity, Graph Neural Networks (GNNs) have been widely applied to functional connectivities (FC) derived from fMRI data, owing to their ability to capture the synergistic interactions among brain regions. However, in the human brain, performing complex tasks typically involves the activation of certain pathways, which can be represented as paths across graphs. Conventional GNNs struggle to learn from these pathways because of the long-range dependencies spanning multiple pathways. To address these challenges, we introduce BrainMAP, a novel framework for learning multiple pathways in brain networks. BrainMAP leverages sequential models to identify long-range correlations among sequentialized brain regions and incorporates an aggregation module based on Mixture of Experts (MoE) to learn from multiple pathways. Our comprehensive experiments highlight BrainMAP's superior performance. Furthermore, our framework enables explanatory analyses of the crucial brain regions involved in tasks.
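As a sketch of the kind of MoE aggregation the abstract describes (the architectural details here are assumptions for illustration, not BrainMAP's released code), a gating network softly weights a small set of expert networks over pooled pathway embeddings:

```python
# Minimal Mixture-of-Experts aggregator: a gate scores each expert from a
# pooled summary of the pathway embeddings, and the expert outputs are
# combined by the gate weights. Sizes are illustrative.
import torch
import torch.nn as nn

class MoEAggregator(nn.Module):
    def __init__(self, dim=64, n_experts=4):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
             for _ in range(n_experts)]
        )
        self.gate = nn.Linear(dim, n_experts)   # one score per expert

    def forward(self, pathways):                # (batch, n_paths, dim)
        pooled = pathways.mean(dim=1)           # summary used for gating
        weights = torch.softmax(self.gate(pooled), dim=-1)            # (B, E)
        outs = torch.stack([e(pooled) for e in self.experts], dim=1)  # (B, E, dim)
        return (weights.unsqueeze(-1) * outs).sum(dim=1)              # (B, dim)

# usage: MoEAggregator()(torch.randn(8, 5, 64)) -> tensor of shape (8, 64)
```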
