This content will become publicly available on November 15, 2026

Title: Modular Architecture for High-Performance and Low Overhead Data Transfers
High-performance applications necessitate rapid and dependable transfer of massive datasets across geographically dispersed locations. Traditional file transfer tools often suffer from resource underutilization and instability due to fixed configurations or monolithic optimization methods. We propose AutoMDT, a novel Modular Data Transfer Architecture, to address these issues by employing a deep reinforcement learning agent to simultaneously optimize concurrency levels for read, network, and write operations. This solution incorporates a lightweight network–system simulator, enabling offline training of a Proximal Policy Optimization (PPO) agent in approximately 45 minutes on average, thereby overcoming the impracticality of lengthy online training in production networks. AutoMDT’s modular design decouples I/O and network tasks, allowing the agent to capture complex buffer dynamics precisely and adapt to changing system and network conditions quickly. Evaluations on production-grade testbeds show that AutoMDT achieves up to 8x faster convergence and a reduction in transfer completion times compared to state-of-the-art solutions.
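The joint concurrency-tuning problem in the abstract can be pictured with a toy sketch. Everything below is illustrative, not AutoMDT's actual simulator or agent: the class name, per-thread rates, buffer capacities, and reward shaping are assumptions. The state is the occupancy of the buffers between the read, network, and write stages; the action is the triple of concurrency levels; and the reward trades goodput against the overhead of extra threads. An exhaustive one-step search stands in here for the PPO policy:

```python
# Toy sketch (hypothetical names and constants) of a network-system
# simulator for jointly tuning read/network/write concurrency.

class TransferSimulator:
    """Two buffers sit between the read -> network -> write stages."""
    PER_THREAD = 10.0   # MB/s moved per concurrent thread (toy constant)
    CAPACITY = 100.0    # buffer capacity in MB

    def __init__(self):
        self.read_buf = 0.0   # filled by readers, drained by network threads
        self.net_buf = 0.0    # filled by network threads, drained by writers

    def step(self, read_cc, net_cc, write_cc):
        # Each stage moves at most its concurrency's bandwidth, bounded by
        # what is available upstream and by free space downstream.
        read_in = min(read_cc * self.PER_THREAD, self.CAPACITY - self.read_buf)
        net_move = min(net_cc * self.PER_THREAD, self.read_buf + read_in,
                       self.CAPACITY - self.net_buf)
        write_out = min(write_cc * self.PER_THREAD, self.net_buf + net_move)
        self.read_buf += read_in - net_move
        self.net_buf += net_move - write_out
        goodput = write_out
        overhead = 0.1 * (read_cc + net_cc + write_cc)  # penalize idle threads
        return (self.read_buf, self.net_buf), goodput - overhead

# Stand-in for the learned policy: score every joint concurrency triple
# for a single step (the paper trains a PPO agent against the simulator
# instead of enumerating actions).
best, best_action = float("-inf"), None
for r in range(1, 6):
    for n in range(1, 6):
        for w in range(1, 6):
            _, reward = TransferSimulator().step(r, n, w)
            if reward > best:
                best, best_action = reward, (r, n, w)
print(best_action)  # the balanced triple wins: mismatched stages waste threads
```

In the paper's design, the PPO agent replaces this enumeration and is trained entirely offline against the simulator, which is what keeps production transfers out of the roughly 45-minute training loop.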
Award ID(s):
2451376
PAR ID:
10648956
Author(s) / Creator(s):
Publisher / Repository:
ACM
Date Published:
ISBN:
9798400718717
Page Range / eLocation ID:
939 to 948
Subject(s) / Keyword(s):
Data Transfer Optimization, High-Performance Networks, Concurrency Control, Reinforcement Learning, HPC
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper describes how domain knowledge of power system operators can be integrated into reinforcement learning (RL) frameworks to effectively train agents that control the grid's topology to prevent thermal cascading. Typical RL-based topology controllers perform poorly due to the large search/optimization space. Here, we propose an actor-critic-based agent to address the problem's combinatorial nature and train the agent using the RL environment developed by RTE, the French TSO. To address the challenge of the large optimization space, a curriculum-based approach with reward tuning is incorporated into the training procedure by modifying the environment using network physics for enhanced agent learning. Further, a parallel training approach on multiple scenarios is employed to avoid biasing the agent toward a few scenarios and to make it robust to the natural variability in grid operations. Without these modifications to the training procedure, the RL agent failed for most test scenarios, illustrating the importance of properly integrating domain knowledge of physical systems into real-world RL training. The agent was tested by RTE in the 2019 Learning to Run a Power Network challenge and was awarded 2nd place in accuracy and 1st place in speed. The developed code is open-sourced for public use. Analysis of a simple system demonstrates how the curriculum enhances RL-agent training.
  2. Multi-Agent Reinforcement Learning (MARL) is a key technology in artificial intelligence applications such as robotics, surveillance, and energy systems. Multi-Agent Deep Deterministic Policy Gradient (MADDPG) is a state-of-the-art MARL algorithm that has been widely adopted and is considered a popular baseline for novel MARL algorithms. However, existing implementations of MADDPG on CPU and CPU-GPU platforms do not exploit fine-grained parallelism between cooperative agents and handle inter-agent communication sequentially, leading to sub-optimal throughput in MADDPG training. In this work, we develop the first high-throughput MADDPG accelerator on a CPU-FPGA heterogeneous platform. Specifically, we develop dedicated hardware modules that enable parallel training of each agent's internal Deep Neural Networks (DNNs) and support low-latency inter-agent communication using an on-chip agent interconnection network. Our experimental results show that agent neural network training speed improves by factors of 3.6×–24.3× and 1.5×–29.5× compared with state-of-the-art CPU and CPU-GPU implementations, respectively. Our design achieves up to 1.99× and 1.93× improvements in overall system throughput compared with CPU and CPU-GPU implementations, respectively.
  3. Robots are often built from standardized assemblies (e.g., arms, legs, or fingers), but each robot must be trained from scratch to control all the actuators of all the parts together. In this paper we demonstrate a new approach that takes a single robot and its controller as input and produces a set of modular controllers for each of these assemblies, such that when a new robot is built from the same parts, its control can be quickly learned by reusing the modular controllers. We achieve this with a framework called MeMo, which learns (Me)aningful, (Mo)dular controllers. Specifically, we propose a novel modularity objective to learn an appropriate division of labor among the modules. We demonstrate that this objective can be optimized simultaneously with a standard behavior cloning loss via noise injection. We benchmark our framework in locomotion and grasping environments on simple-to-complex robot morphology transfer. We also show that the modules help in task transfer. On both structure and task transfer, MeMo achieves improved training efficiency over graph neural network and Transformer baselines.
  4. Multi-agent large language models promise flexible, modular architectures for delivering personalized educational content. Drawing on a pilot randomized controlled trial with middle school students (n = 23), we introduce a two-agent GPT-4 framework in which a Profiler agent infers learner-specific preferences and a Rewrite agent dynamically adapts science passages via an explicit message-passing protocol. We implement structured system and user prompts as inter-agent communication schemas to enable real-time content adaptation. The results of an ordinal logistic regression analysis hinted that students may be more likely to prefer texts aligned with their profile, demonstrating the feasibility of multi-agent system-driven personalization and highlighting the need for additional work to build upon this pilot study. Beyond empirical validation, we present a modular multi-agent architecture detailing agent roles, communication interfaces, and scalability considerations. We discuss design best practices, ethical safeguards, and pathways for extending this framework to collaborative agent networks—such as feedback-analysis agents—in K-12 settings. These results advance both our theoretical and applied understanding of multi-agent LLM systems for personalized learning. 
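The Profiler/Rewrite pattern in the abstract above can be sketched with stub agents. All names and the profile schema below are hypothetical; in the actual system each agent is a GPT-4 call driven by structured system/user prompts, whereas here the "agents" are plain functions exchanging a dictionary as the inter-agent message:

```python
# Hypothetical sketch of a two-agent message-passing pipeline: a Profiler
# emits a learner profile, which a Rewrite agent consumes as context.

def profiler_agent(survey_responses):
    # Stub for the Profiler: infer learner preferences from responses.
    # (In practice, an LLM call with a structured system prompt.)
    top_interest = max(survey_responses, key=survey_responses.get)
    return {"reading_level": "grade-6", "interest": top_interest}

def rewrite_agent(passage, profile):
    # Stub for the Rewrite agent: adapt the passage using the profile
    # received over the message-passing interface.
    return (f"[{profile['reading_level']}, themed: {profile['interest']}] "
            + passage)

profile = profiler_agent({"space": 3, "animals": 5})
adapted = rewrite_agent("Photosynthesis converts light into chemical energy.",
                        profile)
print(adapted)
```

The value of the explicit schema is that either stub can be swapped for a real LLM call (or a future feedback-analysis agent) without changing the other side of the interface.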
  5. Gibbons, Phillip B.; Pekhimenko, Gennady; De Sa, Christopher (Ed.)
    The emergence of ML in various cloud system management tasks (e.g., workload autoscaling and job scheduling) has become a core driver of ML-centric cloud platforms. However, there are still numerous algorithmic and systems challenges that prevent ML-centric cloud platforms from being production-ready. In this paper, we focus on the challenges of model performance variability and costly model retraining, introduced by dynamic workload patterns and heterogeneous applications and infrastructures in cloud environments. To address these challenges, we present FLASH, an extensible framework for fast model adaptation in ML-based system management tasks. We show how FLASH leverages existing ML agents and their training data to learn to generalize across applications/environments with meta-learning. FLASH can be easily integrated with an existing ML-based system management agent with a unified API. We demonstrate the use of FLASH by implementing three existing ML agents that manage (1) resource configurations, (2) autoscaling, and (3) server power. Our experiments show that FLASH enables fast adaptation to new, previously unseen applications/environments (e.g., 5.5× faster than transfer learning in the autoscaling task), indicating significant potential for adopting ML-centric cloud platforms in production. 
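FLASH's core idea, adapting a meta-learned initialization to a new environment in a few steps rather than retraining from scratch, can be illustrated with a deliberately tiny example. Nothing below is FLASH's actual API: the "environments" are 1-D quadratic losses with different optima, and a Reptile-style outer loop (one simple flavor of meta-learning, chosen here only for brevity) learns an initialization that adapts faster than a cold start:

```python
# Toy meta-learning sketch: learn an initialization from known
# "applications" so a new, unseen one needs only a few gradient steps.

def grad(theta, opt):
    # d/dtheta of the per-environment loss (theta - opt)^2
    return 2.0 * (theta - opt)

def sgd(theta, opt, steps, lr=0.1):
    for _ in range(steps):
        theta -= lr * grad(theta, opt)
    return theta

# Reptile-style meta-training over the known environments: nudge the
# meta-parameters toward each task's adapted solution.
train_optima = [1.0, 2.0, 3.0]
meta_theta, meta_lr = 0.0, 0.5
for _ in range(50):
    for opt in train_optima:
        adapted = sgd(meta_theta, opt, steps=5)
        meta_theta += meta_lr * (adapted - meta_theta)

# Fast adaptation to a new environment: starting from the meta-learned
# initialization gets closer to the new optimum (2.5) in 3 steps than
# starting cold from 0.0.
new_opt = 2.5
from_meta = sgd(meta_theta, new_opt, steps=3)
from_scratch = sgd(0.0, new_opt, steps=3)
print(abs(from_meta - new_opt) < abs(from_scratch - new_opt))
```

The same shape of argument underlies the abstract's 5.5× claim: when the meta-initialization already sits near the family of solutions, the per-application fine-tuning budget shrinks dramatically.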