While sketch-based network telemetry is attractive, realizing its potential benefits has been elusive in practice. Existing sketch solutions offer low-level interfaces and impose high effort on operators to satisfy telemetry intents with required accuracies. Extending these approaches to reduce effort results in inefficient deployments with poor accuracy-resource tradeoffs. We present SketchPlan, an abstraction layer for sketch-based telemetry that reduces effort and achieves high efficiency. SketchPlan takes an ensemble view across telemetry intents and sketches, instead of considering each intent-sketch pair in isolation as existing approaches do. We show that SketchPlan improves accuracy-resource tradeoffs over baselines by up to 12x in single-node settings and up to 60x in network-wide settings. SketchPlan is open-sourced at: https://github.com/milindsrivastava1997/SketchPlan
Free, publicly-accessible full text available November 4, 2025
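The SketchPlan abstract above builds on sketch-based telemetry primitives that trade memory for accuracy. The minimal Python sketch below illustrates one such primitive, a Count-Min sketch answering a per-flow frequency intent; the class and parameter names are illustrative assumptions, and SketchPlan's ensemble-level planning across intents is not reproduced here.

# Hypothetical example of the kind of sketch primitive a telemetry planner
# allocates memory to; not SketchPlan's API or its planning algorithm.
import hashlib

class CountMinSketch:
    """Approximate per-key frequency counter with bounded memory."""

    def __init__(self, width: int = 1024, depth: int = 4):
        self.width = width   # counters per row (memory/accuracy knob)
        self.depth = depth   # number of independent hash rows
        self.rows = [[0] * width for _ in range(depth)]

    def _index(self, key: str, row: int) -> int:
        digest = hashlib.sha256(f"{row}:{key}".encode()).hexdigest()
        return int(digest, 16) % self.width

    def update(self, key: str, count: int = 1) -> None:
        for r in range(self.depth):
            self.rows[r][self._index(key, r)] += count

    def estimate(self, key: str) -> int:
        # Count-Min never underestimates; report the minimum over rows.
        return min(self.rows[r][self._index(key, r)] for r in range(self.depth))

if __name__ == "__main__":
    cms = CountMinSketch(width=2048, depth=4)
    for src_ip in ["10.0.0.1", "10.0.0.1", "10.0.0.2"]:
        cms.update(src_ip)
    print(cms.estimate("10.0.0.1"))  # ~2 (may overestimate, never underestimate)

The width and depth parameters are the resource knobs an ensemble planner would tune per intent to meet a target accuracy.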
Federated Learning (FL) enables edge devices or clients to collaboratively train machine learning (ML) models without sharing their private data. Much of the existing work in FL focuses on efficiently learning a model for a single task. In this paper, we study simultaneous training of multiple FL models using a common set of clients. The few existing simultaneous training methods employ synchronous aggregation of client updates, which can cause significant delays because large models and/or slow clients can bottleneck the aggregation. On the other hand, a naive asynchronous aggregation is adversely affected by stale client updates. We propose FedAST, a buffered asynchronous federated simultaneous training algorithm that overcomes bottlenecks from slow models and adaptively allocates client resources across heterogeneous tasks. We provide theoretical convergence guarantees of FedAST for smooth non-convex objective functions. Extensive experiments over multiple real-world datasets demonstrate that our proposed method outperforms existing simultaneous FL approaches, achieving up to 46.0% reduction in time to train multiple tasks to completion.
Free, publicly-accessible full text available July 19, 2025
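The buffer-and-aggregate idea summarized in the FedAST abstract above can be illustrated with a generic buffered asynchronous aggregation step: client updates accumulate in a buffer and are applied once enough arrive, down-weighted by staleness. The sketch below is a simplified assumption-laden illustration (function names, the 1/(1+staleness) weighting, and the buffer contents are all hypothetical), not the FedAST algorithm or its adaptive resource allocation.

# Hypothetical illustration of buffered asynchronous aggregation in FL.
import numpy as np

def staleness_weight(staleness: int) -> float:
    """Down-weight updates computed against an older global model version."""
    return 1.0 / (1.0 + staleness)

def buffered_async_round(global_model, buffer, lr: float = 1.0):
    """Apply one buffer of (client_delta, staleness) pairs to the global model."""
    weights = np.array([staleness_weight(s) for _, s in buffer])
    weights /= weights.sum()
    aggregated = sum(w * delta for w, (delta, _) in zip(weights, buffer))
    return global_model + lr * aggregated

if __name__ == "__main__":
    model = np.zeros(3)
    # Two fresh client updates and one stale update (computed 2 versions ago).
    buf = [(np.array([1.0, 0.0, 0.0]), 0),
           (np.array([0.0, 1.0, 0.0]), 0),
           (np.array([0.0, 0.0, 1.0]), 2)]
    model = buffered_async_round(model, buf)
    print(model)  # stale update contributes less to the aggregate

Buffering bounds the staleness seen by the aggregator, which is what lets an asynchronous scheme avoid both the slow-client bottleneck of synchronous aggregation and the instability of fully unbuffered asynchrony.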