Search for: All records

Award ID contains: 2415216

Note: Clicking a Digital Object Identifier (DOI) number takes you to an external site maintained by the publisher. Some full-text articles may not yet be available free of charge during the embargo period.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. With the rapid innovation of GPUs, heterogeneous GPU clusters in both public clouds and on-premise data centers have become increasingly commonplace. In this paper, we demonstrate how pipeline parallelism, a technique well-studied for throughput-oriented deep learning model training, can be used effectively for serving latency-bound model inference, e.g., in video analytics systems, on heterogeneous GPU clusters. Our work exploits the synergy between diversity in model layers and diversity in GPU architectures, which results in comparable inference latency for many layers when running on low-class and high-class GPUs. We explore how this overlooked capability of low-class GPUs can be exploited using pipeline parallelism and present a novel inference serving system, PPipe, that employs pool-based pipeline parallelism via an MILP-based control plane and a data plane that performs resource reservation-based adaptive batching. Evaluation results on diverse workloads (18 CNN models) show that PPipe achieves 41.1%–65.5% higher utilization of low-class GPUs while maintaining high utilization of high-class GPUs, leading to 32.2%–75.1% higher serving throughput compared to various baselines. A minimal sketch of the stage-partitioning idea appears after this entry.
    Free, publicly-accessible full text available July 9, 2026
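To make the abstract's core idea concrete, here is a minimal Python sketch, not PPipe's implementation: every latency number, pool size, and function name is hypothetical, and PPipe's control plane solves a richer MILP over many models and GPU classes. The brute-force search below, over a single two-stage split of one model, stands in for that optimization: because early layers run nearly as fast on low-class GPUs, a split that places them on the low-class pool can maximize the bottleneck (minimum) stage throughput of the pipeline.

```python
# Hypothetical per-layer inference latencies (ms) on each GPU class.
# Early layers are nearly as fast on the low-class GPU, which is the
# "comparable latency" observation the paper exploits.
LATENCY_LOW  = [1.1, 1.2, 1.0, 1.3, 2.8, 3.5, 4.0, 4.2]  # e.g., T4-class
LATENCY_HIGH = [1.0, 1.0, 0.9, 1.1, 1.2, 1.4, 1.5, 1.6]  # e.g., A100-class

POOL_LOW, POOL_HIGH = 4, 2  # hypothetical GPU counts in each pool


def stage_throughput(latencies, pool_size):
    """Requests/ms a pool can sustain for a stage with these layer latencies."""
    stage_ms = sum(latencies)
    return pool_size / stage_ms


def best_split(lat_low, lat_high, pool_low, pool_high):
    """Exhaustively pick the layer index where the low-class stage ends,
    maximizing the bottleneck (minimum) stage throughput of the pipeline."""
    best_throughput, best_index = 0.0, None
    for split in range(1, len(lat_low)):
        t1 = stage_throughput(lat_low[:split], pool_low)    # low-class stage
        t2 = stage_throughput(lat_high[split:], pool_high)  # high-class stage
        bottleneck = min(t1, t2)
        if bottleneck > best_throughput:
            best_throughput, best_index = bottleneck, split
    return best_throughput, best_index


if __name__ == "__main__":
    throughput, split = best_split(LATENCY_LOW, LATENCY_HIGH, POOL_LOW, POOL_HIGH)
    print(f"Layers [0, {split}) on the low-class pool, "
          f"[{split}, {len(LATENCY_LOW)}) on the high-class pool; "
          f"bottleneck throughput ≈ {throughput * 1000:.0f} req/s")
```

With these made-up numbers the search keeps the first five layers on the low-class pool, keeping both pools busy rather than idling the cheaper GPUs; the paper's MILP additionally handles batching and reservation decisions that this sketch omits.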
  2. We present POPPER, a dataflow system for building Machine Learning (ML) workflows. A novel aspect of POPPER is its built-in support for in-flight error handling, which is crucial for developing effective ML workflows. POPPER provides a convenient API that allows users to create and execute complex workflows comprising traditional data processing operations (such as map, filter, and join) and user-defined error handlers. The latter enable in-flight detection and correction of errors introduced by ML models in the workflows. Inside POPPER, we model the workflow as a reactive dataflow, a directed cyclic graph, to achieve efficient execution through pipeline parallelization. We demonstrate the in-flight error-handling capabilities of POPPER, for which we have built a graphical interface that allows users to specify workflows, visualize and interact with their reactive dataflows, and delve into the internals of POPPER. A minimal sketch of an operator pipeline with an in-flight error handler appears after this entry.
    Free, publicly-accessible full text available May 19, 2026
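The following Python sketch is not POPPER's actual API; the classify() stand-in model, the confidence threshold, and all operator names are hypothetical. It only illustrates the composition the abstract describes: map/filter operators chained into a dataflow, with a user-defined error handler that detects and corrects suspect ML predictions while records are in flight.

```python
def classify(record):
    """Stand-in for an ML model: label records, sometimes with low confidence."""
    label = "positive" if record["score"] >= 0.5 else "negative"
    confidence = abs(record["score"] - 0.5) * 2  # toy confidence measure
    return {**record, "label": label, "confidence": confidence}


def correct_low_confidence(pred):
    """User-defined error handler: flag low-confidence predictions for a
    fallback path (here a rule; in practice a human or a larger model)."""
    if pred["confidence"] < 0.2:
        return {**pred, "label": "needs_review", "corrected": True}
    return pred


def map_op(fn, stream):
    """Map operator: apply fn to each record as it flows through."""
    for item in stream:
        yield fn(item)


def filter_op(keep, stream):
    """Filter operator: pass through only records satisfying keep()."""
    for item in stream:
        if keep(item):
            yield item


if __name__ == "__main__":
    records = [{"id": i, "score": s}
               for i, s in enumerate([0.95, 0.52, 0.48, 0.10])]

    # Workflow: classify -> handle errors in flight -> keep actionable results.
    stream = map_op(classify, iter(records))
    stream = map_op(correct_low_confidence, stream)   # in-flight error handling
    stream = filter_op(lambda p: p["label"] != "needs_review", stream)

    for result in stream:
        print(result)
```

Because the operators are lazy generators, each record streams through the whole chain one at a time, loosely mirroring pipelined execution; POPPER's reactive dataflow is a directed cyclic graph with true pipeline parallelization, which this linear chain does not attempt to reproduce.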