High-performance computing is a driving force behind scientific innovation and discovery. However, as the number of users and the complexity of high-performance computing systems grow, so does the volume and variability of technical issues handled by support teams. The evolving nature of these issues presents a need for automated tools that can extract clear, accurate, and relevant frequently asked questions directly from support tickets. This need was addressed by developing a novel pipeline that incorporates semantic clustering, representation learning, and large language models. While prior research laid strong foundations across classification, clustering, and large language model-based question answering, our work augments these efforts by integrating semantic clustering, domain-specific summarization, and multi-stage generation into a scalable pipeline for autonomous technical support. To prioritize high-impact issues, the pipeline began by filtering tickets based on anomaly frequency and recency. It then leveraged an instruction-tuned large language model to clean and summarize each ticket into a structured issue-resolution pair. Next, unsupervised semantic clustering was performed to identify subclusters of semantically similar tickets within broader topic clusters. A large language model-based generation module was then applied to create frequently asked questions representing the most dominant issues. A structured evaluation by subject matter experts indicated that our approach transformed technical support tickets into understandable, factually sound, and pertinent frequently asked questions. The ability to extract fine-grained insights from raw ticket data enhances the scalability, efficiency, and responsiveness of technical support workflows in high-performance computing environments, ultimately enabling faster troubleshooting and more accessible pathways to scientific discovery.
Free, publicly-accessible full text available November 16, 2026
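To make the summarize-cluster-generate stages concrete, here is a minimal sketch of such a pipeline. It assumes a hypothetical `llm` callable (prompt in, completion out) standing in for the instruction-tuned model, and substitutes generic open-source components (sentence-transformers embeddings, scikit-learn agglomerative clustering) for whatever the paper actually uses; the ticket-filtering stage and all prompts are illustrative, not the authors'.

```python
from collections import Counter

from sentence_transformers import SentenceTransformer  # pip install sentence-transformers
from sklearn.cluster import AgglomerativeClustering

def summarize_ticket(llm, raw_text: str) -> str:
    """Distill one raw ticket into a structured issue-resolution pair.
    `llm` is a hypothetical callable; swap in any chat-completion API."""
    prompt = (
        "Summarize this HPC support ticket as:\n"
        "Issue: <one sentence>\nResolution: <one sentence>\n\nTicket:\n" + raw_text
    )
    return llm(prompt)

def cluster_and_draft_faqs(llm, tickets: list[str], distance_threshold: float = 1.0):
    # Stage 1 (filtering by anomaly frequency and recency) is omitted for brevity.
    # Stage 2: clean and summarize each ticket into an issue-resolution pair.
    summaries = [summarize_ticket(llm, t) for t in tickets]
    # Stage 3: embed the summaries and group semantically similar tickets.
    embedder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = embedder.encode(summaries)
    labels = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold
    ).fit_predict(embeddings)
    # Stage 4: draft one FAQ entry per subcluster, largest (most dominant) first.
    faqs = []
    for label, _count in Counter(labels).most_common():
        members = [s for s, lab in zip(summaries, labels) if lab == label]
        prompt = (
            "Write one FAQ entry (question and answer) covering these related issues:\n\n"
            + "\n---\n".join(members)
        )
        faqs.append(llm(prompt))
    return faqs
```

The distance threshold controls how fine-grained the subclusters are; in practice it would be tuned against the topic clusters the paper describes.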
-
Chain of thought is a natural inference-time method for increasing the computational power of transformer-based large language models (LLMs), but comes at the cost of sequential decoding. Are there more efficient alternatives to expand a transformer's expressive power without adding parameters? We consider transformers with padding tokens as a form of parallelizable test-time compute. We show that averaging-hard-attention, masked-pre-norm transformers with polynomial padding recognize precisely the class FO-uniform TC^0 of extremely parallelizable problems. While the TC^0 upper bound was known, proving a matching lower bound had been elusive. Further, our novel analysis reveals the precise expanded power of padded transformers when coupled with another form of inference-time compute, namely dynamically increasing depth via looping. Our core technical contribution is to show how padding helps bring the notions of complete problems and reductions, which have been a cornerstone of classical complexity theory, to the formal study of transformers. Armed with this new tool, we prove that padded transformers with O(log^d n) looping on inputs of length n recognize exactly the class FO-uniform TC^d of moderately parallelizable problems. Thus, padding and looping together systematically expand transformers' expressive power: with polylogarithmic looping, polynomially padded transformers recognize precisely the class FO-uniform NC, the best that could be expected without losing parallelism (unless NC = P). Our results thus motivate further exploration of padding and looping as parallelizable alternatives to chain of thought for test-time compute.
Free, publicly-accessible full text available December 2, 2026
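The two mechanisms the abstract studies, padding (widening the sequence with content-free tokens) and looping (reapplying a weight-tied block to increase depth), are easy to picture in code. The sketch below is a minimal PyTorch illustration only: it uses a standard softmax-attention encoder layer rather than the paper's averaging-hard-attention, masked-pre-norm construction, and every name and size in it is an invented placeholder.

```python
import torch
import torch.nn as nn

class PaddedLoopedTransformer(nn.Module):
    """One weight-tied encoder block applied `loops` times to an input that has
    been extended with learned padding tokens, i.e. blank parallel compute space."""
    def __init__(self, vocab_size: int, d_model: int = 64, nhead: int = 4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.pad_token = nn.Parameter(torch.zeros(d_model))  # learned pad embedding
        self.block = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.readout = nn.Linear(d_model, 2)  # accept / reject the input string

    def forward(self, tokens: torch.Tensor, n_pad: int, loops: int) -> torch.Tensor:
        x = self.embed(tokens)                              # (batch, n, d)
        pads = self.pad_token.expand(x.size(0), n_pad, -1)  # (batch, n_pad, d)
        x = torch.cat([x, pads], dim=1)                     # padding: widen, no new info
        for _ in range(loops):                              # looping: deepen, no new weights
            x = self.block(x)
        return self.readout(x[:, 0])                        # decision read at position 0

model = PaddedLoopedTransformer(vocab_size=16)
tokens = torch.randint(0, 16, (1, 10))      # input of length n = 10
# Polynomial padding (here n^2) with a small number of loops, echoing the regimes
# the abstract analyzes; the exact schedule is the paper's, not this sketch's.
logits = model(tokens, n_pad=10 ** 2, loops=4)
```

The point of the sketch is the parallelism claim: each loop iteration processes all real and pad positions at once, whereas chain of thought must decode its extra tokens one at a time.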
-
Large-scale general domain pretraining followed by downstream-specific finetuning has become a predominant paradigm in machine learning. However, discrepancies between the pretraining and target domains can still lead to performance degradation in certain cases, underscoring the need for task-adaptive continued pretraining (TAP). TAP methods typically involve continued pretraining on task-specific unlabeled datasets or introducing additional unsupervised learning objectives to enhance model capabilities. While many TAP methods perform continued pretraining with multiple pretraining objectives, they often determine the tradeoff parameters between objectives manually, resulting in suboptimal outcomes and higher computational costs. In this paper, we propose TapWeight, a task-adaptive pretraining framework which automatically determines the optimal importance of each pretraining objective based on downstream feedback. TapWeight reweights each pretraining objective by solving a multi-level optimization problem. We applied TapWeight to both molecular property prediction and natural language processing tasks, significantly surpassing baseline methods. Experimental results validate the effectiveness and generalizability of TapWeight.
Free, publicly-accessible full text available June 11, 2026
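TapWeight itself solves a multi-level optimization problem; as a much simpler stand-in that makes the reweighting idea concrete, the sketch below uses a first-order gradient-alignment heuristic: each pretraining objective's weight reflects how well its gradient aligns with the downstream feedback gradient. This is explicitly not the paper's algorithm, and the model and objective names are toy placeholders.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a shared encoder, two pretraining objectives, and a downstream
# validation loss. All of these are hypothetical, not from the paper.
model = nn.Linear(8, 8)
params = list(model.parameters())

def loss_a(m):  # e.g. a reconstruction-style pretraining objective
    x = torch.randn(4, 8)
    return ((m(x) - x) ** 2).mean()

def loss_b(m):  # e.g. a regularization-style second objective
    return sum(p.pow(2).sum() for p in m.parameters())

def loss_downstream(m):  # downstream feedback, e.g. a validation loss
    x, y = torch.randn(4, 8), torch.randn(4, 8)
    return ((m(x) - y) ** 2).mean()

def flat_grad(loss):
    """Gradient of `loss` w.r.t. the shared parameters, flattened to one vector."""
    return torch.cat([g.reshape(-1) for g in torch.autograd.grad(loss, params)])

# Reweight objectives by the alignment of their gradients with downstream feedback,
# then take one continued-pretraining step under the weighted objective.
g_down = flat_grad(loss_downstream(model))
weights = torch.softmax(torch.stack([
    torch.cosine_similarity(flat_grad(loss_a(model)), g_down, dim=0),
    torch.cosine_similarity(flat_grad(loss_b(model)), g_down, dim=0),
]), dim=0)

opt = torch.optim.SGD(params, lr=1e-2)
opt.zero_grad()
(weights[0] * loss_a(model) + weights[1] * loss_b(model)).backward()
opt.step()
```

The key contrast with manual tuning survives even in this toy: the tradeoff weights are recomputed from downstream signal rather than fixed by hand, which is the behavior TapWeight obtains more rigorously through its multi-level formulation.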
-
PLRV-O: Advancing Differentially Private Deep Learning via Privacy Loss Random Variable Optimization
Free, publicly-accessible full text available November 19, 2026