Title: Coeus: Clustering (A)like Patterns for Practical Machine Intelligent Hybrid Memory Management
Emerging workloads benefit from the massive memory capacities provided by hybrid memory platforms. Recent system-level hybrid memory management solutions integrate machine learning methods to better predict complex data access behaviors. Given the substantial learning overheads involved, such solutions train parallel recurrent neural networks to learn access patterns at the granularity of a page, for a carefully selected page subset. We observe that the size of this subset varies immensely across workload classes, sizes and patterns. Coarsening the granularity to the level of a page group can reduce the aggregate learning overheads, yet unsupervised machine learning clustering methods are not practical to use in this context. Instead, this paper builds Coeus, a page grouping mechanism for machine learning-based hybrid memory management. Coeus is simple, robust and efficient. It leverages data reuse insights to fine-tune the granularity at which patterns are interpreted by the system. As a result, Coeus creates large clusters of pages that share the same access behavior, in a practical way. Coeus reduces the associated learning overheads by almost 3x. In addition, Coeus achieves 3x higher application performance than configurations of existing hybrid memory managers, through the combined effects of applying machine learning to more pages and of performing management operations at a better granularity.
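To make the grouping idea concrete, below is a minimal Python sketch of reuse-based page grouping in the spirit of Coeus. The per-period count signature, the exact-match clustering, and all names here are illustrative assumptions, not the paper's actual mechanism.

```python
# Minimal sketch: group pages whose per-period access signatures match,
# so one learned model can cover the whole group. Heuristic is assumed.
from collections import defaultdict

def page_signatures(trace, period_len=1000):
    """Summarize each page's behavior as a tuple of per-period access counts."""
    counts = defaultdict(lambda: defaultdict(int))
    for t, page in enumerate(trace):
        counts[page][t // period_len] += 1
    n_periods = (len(trace) + period_len - 1) // period_len
    return {p: tuple(c.get(i, 0) for i in range(n_periods)) for p, c in counts.items()}

def group_pages(trace, period_len=1000):
    """Cluster pages that share the same access signature."""
    groups = defaultdict(list)
    for page, sig in page_signatures(trace, period_len).items():
        groups[sig].append(page)
    return list(groups.values())

# Toy trace: pages 0 and 1 behave alike, page 7 is different.
trace = [0, 1, 7, 0, 1] * 600
print(group_pages(trace))   # -> [[0, 1], [7]]
```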
Award ID(s):
2016701
NSF-PAR ID:
10469909
Author(s) / Creator(s):
;
Publisher / Repository:
IEEE
Date Published:
ISBN:
978-1-6654-9956-9
Page Range / eLocation ID:
615 to 624
Format(s):
Medium: X
Location:
Taormina, Italy
Sponsoring Org:
National Science Foundation
More Like this
1. Current state-of-the-art systems for hybrid memory management are enriched with machine intelligence. To enable the practical use of Machine Learning (ML), system-level page schedulers focus the ML model training on a small subset of the applications' memory footprint, while using existing lightweight historical information to predict the access behavior of the majority of the pages. To maximize application performance improvements, the pages selected for machine learning-based management are identified with elaborate page selection methods that involve calculating detailed performance estimates which depend on the configuration of the hybrid memory platform. This paper explores opportunities to reduce such operational overheads of machine learning-based hybrid memory page schedulers by using visualization techniques to depict memory access patterns and reveal spatial and temporal correlations among the selected pages that current methods fail to leverage. We propose an initial version of a visualization pipeline for prioritizing pages for machine learning that is independent of the hybrid memory configuration. Our approach selects pages whose ML-based management delivers, on average, performance levels within 5% of current solutions, while reducing the page selection time by 75x. We discuss future directions and make a case that visualization and computer vision methods can unlock new insights and reduce the operational complexity of emerging systems solutions.
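As an illustration of the visualization idea, here is a minimal sketch that renders a page access trace as a (page x time-bin) heatmap. The binning scheme, dimensions, and names are assumptions, not the pipeline described above.

```python
# Minimal sketch: bucket accesses into a (page, time-bin) heatmap.
# Bright rows reveal hot pages; diagonal streaks reveal strided sweeps.
import numpy as np

def trace_to_image(trace, n_pages, n_bins=64):
    """Build a heatmap of access counts per page per time bin."""
    img = np.zeros((n_pages, n_bins), dtype=np.uint32)
    bin_len = max(1, len(trace) // n_bins)
    for t, page in enumerate(trace):
        img[page, min(t // bin_len, n_bins - 1)] += 1
    return img

# Toy trace: a streaming sweep over pages 0..31, repeated.
trace = list(range(32)) * 100
heat = trace_to_image(trace, n_pages=32)
print(heat.shape)   # (32, 64); e.g., render with matplotlib.pyplot.imshow(heat)
```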
2. Current state-of-the-art resource management systems leverage Machine Learning (ML) methods to enable the efficient use of heterogeneous memory hardware deployed across emerging computing platforms. While machine intelligence can be effectively used to learn and predict the complex data access patterns of modern analytics, applying ML over today's exploded data sizes and memory footprints is prohibitively expensive for practical system-level integration. For this reason, recent solutions use existing lightweight historical information to predict the access behavior of the majority of the application pages, and train ML models over only a small page subset. To maximize application performance improvements, the pages selected for machine learning-based management are identified with elaborate page selection methods that involve calculating detailed performance estimates which depend on the configuration of the hybrid memory platform. This paper aims to reduce such vast operational overheads, which further exacerbate the already high overheads of using machine intelligence, in return for high performance and efficiency. To this end, we build Cronus, an image-based pipeline for selecting pages for ML-based management. We visualize memory access patterns, revealing spatial and temporal correlations among the selected pages that current methods fail to leverage. We then use the created images to detect patterns and select page groups for machine learning model deployment. Cronus drastically reduces the operational costs, while preserving the effectiveness of the page selection and the achieved performance of machine intelligent hybrid memory management. This work makes a case that visualization and computer vision methods can unlock new insights and reduce the operational complexity of emerging systems solutions.
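A minimal sketch of the image-to-page-group step, loosely in the spirit of Cronus: it greedily groups heatmap rows (pages) with near-identical normalized access histograms. The row-similarity heuristic and threshold are assumptions, not the paper's actual computer vision method; the toy image stands in for the heatmap produced above.

```python
# Minimal sketch: group pages whose heatmap rows look alike. Heuristic assumed.
import numpy as np

def group_rows(img, threshold=0.95):
    """Greedily group rows with cosine similarity above `threshold`."""
    norms = np.linalg.norm(img, axis=1, keepdims=True)
    unit = img / np.where(norms == 0, 1, norms)
    groups, assigned = [], np.zeros(len(img), dtype=bool)
    for i in range(len(img)):
        if assigned[i]:
            continue
        sims = unit @ unit[i]                        # similarity to row i
        members = np.where((sims >= threshold) & ~assigned)[0]
        assigned[members] = True
        groups.append(members.tolist())
    return groups

# Toy heatmap: pages 0 and 1 share a pattern, page 2 is the opposite phase.
img = np.array([[9, 0, 9, 0], [9, 0, 9, 0], [0, 9, 0, 9]], dtype=float)
print(group_rows(img))   # -> [[0, 1], [2]]
```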
3. The increasing demand of big data analytics for more main memory capacity in datacenters and exascale computing environments is driving the integration of heterogeneous memory technologies. These new technologies exhibit vastly greater differences in access latency, bandwidth and capacity than traditional NUMA systems. Leveraging this heterogeneity while also delivering application performance enhancements requires intelligent data placement. We present Kleio, a page scheduler with machine intelligence for applications that execute across hybrid memory components. Kleio is a hybrid page scheduler that combines existing lightweight, history-based data tiering methods for hybrid memory with novel intelligent placement decisions based on deep neural networks. We contribute new understanding toward the scope of benefits that can be achieved by using intelligent page scheduling compared to existing history-based approaches, and toward the choice of deep learning algorithms and parameters that are effective for this problem space. Kleio incorporates a new method for prioritizing the pages that lead to the highest performance boost, while limiting the resulting system resource overheads. Our performance evaluation indicates that Kleio reduces on average 80% of the performance gap between existing solutions and an oracle with knowledge of future access patterns. Kleio provides hybrid memory systems with fast and effective neural network training and prediction accuracy levels that bring significant application performance improvements with limited resource overheads, laying the grounds for its practical integration in future systems.
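A minimal PyTorch sketch of per-page recurrent prediction, loosely in the spirit of Kleio's per-page neural networks: an LSTM learns one page's per-period access counts and predicts the next period. The architecture, window size, and hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch: predict a page's next-period access count with an LSTM.
import torch
import torch.nn as nn

class PageLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, seq, 1) access counts
        out, _ = self.lstm(x)
        return self.head(out[:, -1])         # predict the next period's count

# One page's periodic access-count history, windowed into training samples.
history = torch.tensor([float((i % 4) * 10) for i in range(200)])
X = torch.stack([history[i:i + 8] for i in range(192)]).unsqueeze(-1)
y = history[8:].unsqueeze(-1)                # target: count after each window

model = PageLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()
print(model(X[:4]).squeeze().tolist())       # should approach 0, 10, 20, 30
```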
4. Intel's SGX offers state-of-the-art security features, including confidentiality, integrity, and authentication (CIA) when accessing sensitive pages in memory. Sensitive pages are placed in an Enclave Page Cache (EPC) within the physical memory before they can be accessed by the processor. To control the overheads imposed by CIA guarantees, the EPC operates with a limited capacity (currently 128 MB). Because of this limited EPC size, sensitive pages must be frequently swapped between EPC and non-EPC regions in memory. A page swap is expensive (about 40K cycles) because it requires an OS system call, page copying, updates to integrity trees and metadata, etc. Our analysis shows that the paging overhead can slow the system on average by 5x, and other studies have reported even higher slowdowns for memory-intensive workloads. The paging overhead can be reduced by growing the size of the EPC to match the size of physical memory, while allowing the EPC to also accommodate non-sensitive pages. However, at least two important problems must be addressed to enable this growth in EPC: (i) the depth of the integrity tree and its cacheability must be improved to keep memory bandwidth overheads in check, and (ii) the space overheads of integrity verification (tree and MACs) must be reduced. We achieve both goals by introducing a variable arity unified tree (VAULT) organization that is more compact and has lower depth. We further reduce the space overheads with techniques that combine MAC sharing and compression. With simulations, we show that the combination of our techniques can address most inefficiencies in SGX memory access and improve overall performance by 3.7x, relative to an SGX baseline, while incurring a memory capacity overhead of only 4.7%.
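A back-of-the-envelope sketch of why higher tree arity lowers depth, which illustrates VAULT's motivation: fewer tree levels mean fewer memory accesses per integrity verification. The protected-memory size and 64-byte block size are assumed for illustration.

```python
# Minimal sketch: integrity-tree depth as a function of arity.
import math

def tree_depth(protected_bytes, block=64, arity=8):
    """Levels needed for a tree of the given arity over `block`-byte leaves."""
    leaves = protected_bytes // block
    return math.ceil(math.log(leaves, arity))

GiB = 1 << 30
for arity in (8, 16, 32, 64):
    print(f"arity {arity:2d}: depth {tree_depth(16 * GiB, arity=arity)}")
# arity  8: depth 10 / arity 16: depth 7 / arity 32: depth 6 / arity 64: depth 5
```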
5. The demand for memory is ever increasing. Many prior works have explored hardware memory compression to increase effective memory capacity. However, prior works compress and pack/migrate data at a small, memory-block-level granularity; this introduces an additional block-level translation after the page-level virtual address translation. In general, the smaller the granularity of address translation, the higher the translation overhead. As such, this additional block-level translation exacerbates the well-known address translation problem for large and/or irregular workloads. A promising solution is to save memory only from cold (i.e., less recently accessed) pages, without saving memory from hot (i.e., more recently accessed) pages (e.g., keeping hot pages uncompressed); this avoids block-level translation overhead for hot pages. However, it still faces two challenges. First, after a compressed cold page becomes hot again, migrating the page to a full 4KB DRAM location still adds another level of translation (albeit page-level, instead of block-level) on top of the existing virtual address translation. Second, compressing only cold data requires compressing it very aggressively to achieve high overall memory savings, and decompressing very aggressively compressed data is very slow (e.g., > 800 ns assuming the latest Deflate ASIC in industry). This paper presents Translation-optimized Memory Compression for Capacity (TMCC) to tackle these two challenges. To address the first challenge, we propose compressing page table blocks in hardware to opportunistically embed compression translations into them in a software-transparent manner, effectively prefetching compression translations during a page walk instead of serially fetching them after the walk. To address the second challenge, we perform a large design space exploration across many hardware configurations and diverse workloads to derive and implement in HDL an ASIC Deflate that is specialized for memory; for memory pages, it is 4x as fast as the state-of-the-art ASIC Deflate, with little to no sacrifice in compression ratio. Our evaluations show that for large and/or irregular workloads, TMCC can either improve performance by 14% without sacrificing effective capacity or provide 2.2x the effective capacity without sacrificing performance, compared to state-of-the-art hardware memory compression for capacity.
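A back-of-the-envelope sketch of the effective-capacity arithmetic behind compressing only cold pages, the trade-off TMCC navigates: modest compression ratios on cold data save little overall, which is why aggressive (and slow-to-decompress) compression is needed. All numbers are illustrative assumptions.

```python
# Minimal sketch: effective capacity when only cold pages are compressed.
def effective_capacity(dram_bytes, cold_fraction, cold_ratio):
    """Logical bytes that fit when a `cold_fraction` of data is stored
    compressed at `cold_ratio`:1 and hot data stays uncompressed."""
    physical_per_logical = (1.0 - cold_fraction) + cold_fraction / cold_ratio
    return dram_bytes / physical_per_logical

GiB = 1 << 30
# 70% cold data at an aggressive 4:1 ratio:
print(effective_capacity(16 * GiB, 0.7, 4.0) / GiB)   # ~33.7 GiB
# ... while a modest 2:1 ratio saves much less:
print(effective_capacity(16 * GiB, 0.7, 2.0) / GiB)   # ~24.6 GiB
```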