Title: Revamping Storage Class Memory With Hardware Automated Memory-Over-Storage Solution
Award ID(s): 1908793
PAR ID: 10295400
Author(s) / Creator(s):
Date Published:
Journal Name: ISCA
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Fast networks and the desire for high resource utilization in data centers and the cloud have driven disaggregation: application compute is separated from storage, but this leads to high overheads when data must move over the network for simple operations. Alternatively, systems could let applications run application logic within storage via user-defined functions; unfortunately, this ties the provisioning and utilization of storage and compute resources together again. We present a new approach to executing storage-level functions in an in-memory key-value store that avoids this problem by dynamically deciding where to execute functions over data. Users write storage functions that are logically decoupled from storage, but storage servers choose where to physically run each invocation. Using a server-internal cost model and observations of function execution, servers run inexpensive functions directly while preferring to execute CPU-heavy functions at client machines. We show that with this approach storage servers can reduce network request-processing costs, avoid server compute bottlenecks, and improve aggregate storage-system throughput. We realize our approach in an in-memory key-value store that executes 3.2 million strictly serializable user-defined storage functions per second with 100 µs response times. When running a mix of logic from different applications, it provides better throughput than running that logic purely at storage servers (85% more) or purely at clients (10% more). For our workloads, it also reduces latency (up to 2x) and transactional aborts (up to 33%) over pure client-side execution. (A sketch of this cost-based placement decision appears after this list.)
  2. Mangun, G.R.; Gazzaniga, M.S. (Eds.)
    The human ability to remember unique experiences from many years ago comes so naturally that we often take it for granted. It depends on three stages: (1) encoding, when new information is initially registered, (2) storage, when encoded information is held in the brain, and (3) retrieval, when stored information is used. Historically, cognitive neuroscience studies of memory have emphasized encoding and retrieval. Yet, the intervening stage may hold the most intrigue, and has become a major research focus in the years since the last edition of this book. Here we describe recent investigations of post-acquisition memory processing in relation to enduring storage. This evidence of memory processing belies the notion that memories stored in the brain are held in stasis, without changing. Various methods for influencing and monitoring brain activity have been applied to study offline memory processing. In particular, memories can be reactivated during sleep and during resting periods, with distinctive physiological correlates. These neural signals shed light on the contribution of hippocampal-neocortical interactions to memory consolidation. Overall, results converge on the notion that memory reactivation is a critical determinant of systems-level consolidation, and thus of future remembering, which in turn facilitates future planning and problem solving. 
  3. Information is an important resource. Storing and retrieving information faithfully are major challenges, and many methods have been developed to understand the principles behind robust information processing. In this review, we focus on information storage and retrieval from the perspective of energetics, dynamics, and statistical mechanics. We first review the Hopfield model of associative memory, the classic energy-based model of memory. We then discuss generalizations and physical realizations of the Hopfield model. Finally, we highlight connections to energy-based neural networks used in deep learning. We hope this review inspires new directions in information storage and retrieval in physical systems. (A worked sketch of the Hopfield energy and update rule appears after this list.)
  4. Log-Structured Merge-trees (LSM-trees) have been widely used in modern NoSQL systems. Due to their out-of-place update design, LSM-trees have introduced memory walls among the memory components of multiple LSM-trees and between the write memory and the buffer cache. Optimal memory allocation among these regions is non-trivial because it is highly workload-dependent. Existing LSM-tree implementations instead adopt static memory allocation schemes for their simplicity and robustness, sacrificing performance. In this paper, we attempt to break down these memory walls in LSM-based storage systems. We first present a memory management architecture that enables adaptive memory management. We then present a partitioned memory component structure with new flush policies that better exploit the write memory to minimize write cost. To break down the memory wall between the write memory and the buffer cache, we further introduce a memory tuner that adjusts the memory allocation between these two regions. We have conducted extensive experiments in the context of Apache AsterixDB using the YCSB and TPC-C benchmarks, and we present the results here. (A sketch of the tuner's allocation decision appears after this list.)
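Below is a minimal Python sketch of the cost-based placement decision described in item 1: a storage server compares a function's observed CPU cost against the cost of shipping its data and picks an execution site accordingly. The names, cost constants, and the cost model itself are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of dynamic function placement (item 1): run cheap
# functions at the storage server, push CPU-heavy ones to the client.
from dataclasses import dataclass

@dataclass
class FunctionStats:
    avg_cpu_us: float      # observed mean server CPU time per invocation
    avg_bytes_read: float  # observed mean bytes of data the function touches

# Assumed, made-up cost constants: per-byte network cost and a CPU budget.
NET_COST_US_PER_BYTE = 0.01
SERVER_CPU_BUDGET_US = 50.0

def choose_execution_site(stats: FunctionStats) -> str:
    """Return 'server' when local execution is cheaper than shipping data."""
    ship_cost_us = stats.avg_bytes_read * NET_COST_US_PER_BYTE
    if stats.avg_cpu_us <= min(ship_cost_us, SERVER_CPU_BUDGET_US):
        return "server"   # cheap function: run next to the data
    return "client"       # CPU-heavy function: ship the records instead

# A small aggregation over 4 KB stays at the server; a 500 us
# CPU-heavy transform is pushed to the client.
print(choose_execution_site(FunctionStats(avg_cpu_us=5.0, avg_bytes_read=4096)))
print(choose_execution_site(FunctionStats(avg_cpu_us=500.0, avg_bytes_read=4096)))
```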
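The Hopfield model summarized in item 3 stores patterns with the Hebbian rule W = (1/N) Σ_μ ξ^μ (ξ^μ)^T and retrieves them by energy-descending updates s_i ← sign(Σ_j W_ij s_j). Here is a self-contained sketch of the textbook model; it illustrates what the review describes and is not code from the review itself.

```python
# Textbook Hopfield associative memory: Hebbian storage plus
# asynchronous, energy-descending retrieval.
import numpy as np

def train_hopfield(patterns: np.ndarray) -> np.ndarray:
    """Hebbian rule: W = (1/N) * sum_mu xi^mu (xi^mu)^T, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W: np.ndarray, s: np.ndarray) -> float:
    """Hopfield energy E(s) = -(1/2) s^T W s; updates never increase it."""
    return -0.5 * float(s @ W @ s)

def recall(W: np.ndarray, s: np.ndarray, steps: int = 200) -> np.ndarray:
    """Asynchronous retrieval: repeatedly set s_i <- sign(sum_j W_ij s_j)."""
    s = s.copy()
    rng = np.random.default_rng(0)
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Store one +/-1 pattern, flip two bits, and retrieve the original.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1], dtype=float)
W = train_hopfield(pattern[None, :])
noisy = pattern.copy()
noisy[:2] *= -1
restored = recall(W, noisy)
print(energy(W, noisy), energy(W, restored))  # energy drops during recall
print(np.array_equal(restored, pattern))      # True: the memory is retrieved
```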
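For item 4, a hedged sketch of the memory-tuner idea: periodically shift memory between the LSM write memory and the buffer cache toward whichever side currently yields the larger marginal disk-I/O saving. The marginal-saving inputs, step size, and names are assumptions for illustration, not AsterixDB's actual tuner logic.

```python
# Illustrative memory tuner: rebalance write memory vs. buffer cache
# based on estimated I/O saved per extra MB given to each region.
def tune_memory(write_mem_mb: float, cache_mb: float,
                write_io_saved_per_mb: float, read_io_saved_per_mb: float,
                step_mb: float = 64.0, min_mb: float = 128.0):
    """Move step_mb toward the region with the larger marginal I/O saving."""
    if write_io_saved_per_mb > read_io_saved_per_mb and cache_mb - step_mb >= min_mb:
        return write_mem_mb + step_mb, cache_mb - step_mb  # grow write memory
    if read_io_saved_per_mb > write_io_saved_per_mb and write_mem_mb - step_mb >= min_mb:
        return write_mem_mb - step_mb, cache_mb + step_mb  # grow buffer cache
    return write_mem_mb, cache_mb  # near the balance point: keep allocation

# Write-heavy phase: enlarging the write memory lowers flush/merge cost more
# than extra cache lowers read cost, so memory flows to the write side.
print(tune_memory(512.0, 1536.0, write_io_saved_per_mb=0.9, read_io_saved_per_mb=0.3))
```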