Search for: All records

Award ID contains: 1956229

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).
What is a DOI Number?

Some links on this page may take you to non-federal websites. Their policies may differ from this site.

  1. Cache management is a critical aspect of computer architecture, encompassing techniques such as cache replacement, bypassing, and prefetching. Existing research has often focused on individual techniques in isolation, overlooking the potential benefits of joint optimization. Moreover, many of these approaches rely on static, intuition-driven policies, limiting their performance under complex and dynamic workloads. To address these challenges, this paper introduces CHROME, a novel concurrency-aware cache management framework. CHROME takes a holistic approach by seamlessly integrating intelligent cache replacement and bypassing with pattern-based prefetching. By leveraging online reinforcement learning, CHROME dynamically adapts cache decisions based on multiple program features and applies a per-decision reward that accounts for both the accuracy of the action and system-level feedback. Our performance evaluation demonstrates that CHROME outperforms current state-of-the-art schemes. Notably, CHROME achieves a performance boost of up to 13.7% over the traditional LRU method in multi-core systems with only modest overhead.
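The abstract above describes learning cache decisions online, with a reward for each decision based on whether the action turned out to be accurate. A minimal sketch of that idea, assuming a toy tabular epsilon-greedy agent with illustrative feature and action names (this is not CHROME's actual design, which uses multiple program features and system-level feedback):

```python
import random
from collections import defaultdict

class RLCachePolicy:
    """Toy online RL agent that decides, per access, whether to
    insert a block into the cache or bypass it. A stand-in for the
    kind of joint decision the abstract describes; hypothetical."""

    def __init__(self, epsilon=0.1, alpha=0.3):
        self.q = defaultdict(float)   # Q[(feature, action)] value table
        self.epsilon = epsilon        # exploration rate
        self.alpha = alpha            # learning rate

    def decide(self, feature):
        # Epsilon-greedy choice between the two actions.
        if random.random() < self.epsilon:
            return random.choice(("insert", "bypass"))
        return max(("insert", "bypass"), key=lambda a: self.q[(feature, a)])

    def reward(self, feature, action, was_reused):
        # +1 if the decision matched the block's actual reuse
        # behavior, -1 otherwise (a simplified accuracy reward).
        r = 1.0 if (action == "insert") == was_reused else -1.0
        old = self.q[(feature, action)]
        self.q[(feature, action)] = old + self.alpha * (r - old)
```

After a few rewards, the agent learns to insert blocks whose feature correlates with reuse and bypass the rest; CHROME additionally folds system-level feedback into the reward, which this sketch omits.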
  2. This study addresses the knowledge gap in request-level storage trace analysis by incorporating workload characterization, compression, and synthesis. The aim is to better understand workload behavior and provide unique workloads for storage system testing under different scenarios. Machine learning techniques such as K-means clustering and PCA are employed to understand trace properties and reduce manual workload selection. By generating synthetic workloads, the proposed method facilitates simulation and modeling-based studies of storage systems, especially for emerging technologies like Storage Class Memory (SCM) with limited workload availability.
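The clustering step mentioned above can be sketched with a minimal stdlib-only k-means. The feature vectors here are illustrative stand-ins (e.g. mean request size and read ratio per trace), not the study's actual feature set or pipeline:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means over per-trace feature vectors, sketching the
    clustering used to group similar workloads. Hypothetical helper,
    not the paper's implementation."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center (squared distance).
        groups = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # Recompute each center as the mean of its group.
        centers = [
            [sum(col) / len(g) for col in zip(*g)] if g else centers[i]
            for i, g in enumerate(groups)
        ]
    return centers, groups
```

In practice one would run PCA first to reduce the feature space, then cluster, and pick one representative trace per cluster instead of selecting workloads by hand.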
  3. Existing tiered memory systems all use DRAM-Preferred as their allocation policy, whereby pages are allocated from higher-performing DRAM until it fills, after which all subsequent allocations are made from lower-performing persistent memory (PM). The novel insight of this work is that the right page allocation policy for a workload can lower the access latencies of newly allocated pages. We design, implement, and evaluate three page allocation policies within a real deployment of a state-of-the-art dynamic tiering system. We observe that the right page allocation policy can improve the performance of a tiered memory system by as much as 17x for certain workloads.
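The DRAM-Preferred policy described above is easy to state as code. A minimal sketch, assuming two tiers and an illustrative alternative policy ("pm_preferred" here is a hypothetical name; the paper's three policies may differ):

```python
class TieredAllocator:
    """Sketch of page allocation in a DRAM + PM tiered memory system.
    'dram_preferred' is the default policy the abstract describes:
    fill DRAM first, then spill to PM."""

    def __init__(self, dram_pages, pm_pages, policy="dram_preferred"):
        self.free = {"dram": dram_pages, "pm": pm_pages}
        self.policy = policy

    def allocate(self):
        # Try tiers in the policy's preference order, falling back
        # to the other tier once the preferred one is exhausted.
        order = ("dram", "pm") if self.policy == "dram_preferred" else ("pm", "dram")
        for tier in order:
            if self.free[tier] > 0:
                self.free[tier] -= 1
                return tier
        raise MemoryError("no free pages in any tier")
```

The work's point is that which `order` is right depends on the workload: a dynamic tiering system later migrates hot pages anyway, so placing new allocations in PM can sometimes keep DRAM free for pages known to be hot.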
  4. Flash and non-volatile memory (NVM) devices endure only a limited number of write-erase cycles. Consequently, when such devices are employed as caches, cache management policies may choose not to cache certain requested items in order to extend device lifespan. In this work, we propose a simple single-parameter utility function to model the trade-off between maximizing hit rate and minimizing write-erase cycles for such caches, and study the problem of developing an offline strategy for deciding whether to write a new item to the cache and, if so, which item already in the cache to replace. Our main result is an efficient, network-flow-based algorithm that finds an optimal cache management policy under this new setting.
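A single-parameter utility of the kind described above might credit each hit and charge each cache write a wear cost. The exact functional form below is an assumption for illustration, not necessarily the paper's:

```python
def cache_utility(hits, writes, alpha):
    """Hypothetical single-parameter utility: each cache hit earns 1,
    each write to the flash/NVM cache costs alpha in wear. Larger
    alpha pushes an optimal policy toward admitting fewer items."""
    return hits - alpha * writes
```

With alpha = 0 this reduces to pure hit-rate maximization; as alpha grows, the optimal offline policy (which the paper computes via network flow) declines to write items whose expected future hits cannot repay the wear cost.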
  5. Storage is the Achilles heel of hybrid cloud deployments. Accessing persistent state over a WAN link, even a dedicated one, imposes a severe performance penalty on applications. We propose FAB, a new storage architecture for the hybrid cloud. FAB addresses two major challenges for hybrid cloud storage: performance efficiency and backup efficiency. It does so by introducing a new FAB layer in the storage stack that provides fault tolerance, performance acceleration, and backup for FAB storage volumes. A preliminary evaluation of FAB's performance acceleration mechanism, deployed over Ceph's distributed block storage system, offers encouragement to pursue this new hybrid cloud storage architecture.
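One way to picture a layer of the kind FAB adds to the storage stack is a fast local tier that absorbs writes and serves reads, with deferred replication over the WAN. This is a toy sketch in the spirit of the abstract; the class, method names, and dict-backed stores are all illustrative, not FAB's actual design or API:

```python
class AccelerationLayer:
    """Toy hybrid-cloud storage layer: reads and writes hit a fast
    local store first; dirty blocks reach the remote (WAN-attached)
    backend only on flush. Hypothetical, for illustration."""

    def __init__(self, remote):
        self.local = {}       # fast on-premises tier
        self.dirty = set()    # blocks not yet replicated remotely
        self.remote = remote  # dict standing in for WAN storage

    def write(self, block, data):
        # Acknowledge from the local tier; defer the WAN round trip.
        self.local[block] = data
        self.dirty.add(block)

    def read(self, block):
        if block in self.local:
            return self.local[block]
        data = self.remote[block]   # slow WAN fetch on a local miss
        self.local[block] = data
        return data

    def flush(self):
        # Background replication to the remote backend; also a
        # natural point for taking backup snapshots.
        for block in self.dirty:
            self.remote[block] = self.local[block]
        self.dirty.clear()
```

The acceleration comes from keeping the WAN off the critical path of reads and writes; the flush path is where fault tolerance and backup efficiency have to be engineered, which is the part FAB's design addresses.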