

Title: Buffered Hash Table: Leveraging DRAM to Enhance Hash Indexes in the Persistent Memory
Award ID(s):
1815303
NSF-PAR ID:
10393756
Journal Name:
2022 IEEE 11th Non-Volatile Memory Systems and Applications Symposium (NVMSA)
Page Range / eLocation ID:
8 to 13
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Abstract: The leftover hash lemma (LHL) is used in the analysis of various lattice-based cryptosystems, such as the Regev and Dual-Regev encryption schemes as well as their leakage-resilient counterparts. The LHL does not hold in the ring setting when the ring is far from a field, which is typical for efficient cryptosystems. Lyubashevsky et al. (Eurocrypt '13) proved a "regularity lemma," which can be used instead of the LHL but applies only to Gaussian inputs. This is in contrast to the LHL, which applies when the input is drawn from any high min-entropy distribution. Our work presents an approach for generalizing the "regularity lemma" of Lyubashevsky et al. to certain conditional distributions. We assume the input was sampled from a discrete Gaussian distribution and consider the induced distribution given side-channel leakage on the input. We present three instantiations of our approach, proving that the regularity lemma holds for three natural conditional distributions.
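    For reference, the classical leftover hash lemma that the abstract contrasts against can be stated as follows; this is the standard textbook form, included here as background rather than taken from the paper:

```latex
% Standard leftover hash lemma (background statement, not from the paper).
% Let H be uniform over a universal family of hash functions
% h : X -> {0,1}^m, and let the input X have min-entropy H_inf(X) >= k. Then
\Delta\bigl( (H, H(X)),\ (H, U_m) \bigr) \;\le\; \tfrac{1}{2}\sqrt{2^{\,m-k}},
% where \Delta denotes statistical distance and U_m is uniform on {0,1}^m.
% The ring-setting "regularity lemma" plays this role for Gaussian inputs,
% which is exactly the restriction the abstract's work relaxes.
```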
  2. Hashing is a fundamental operation in database management, playing a key role in the implementation of numerous core database data structures and algorithms. Traditional hash functions aim to mimic a function that maps a key to a random value, which can result in collisions, where multiple keys are mapped to the same value. There are many well-known schemes, such as chaining, probing, and cuckoo hashing, for handling collisions. In this work, we study whether using learned models instead of traditional hash functions can reduce collisions and whether such a reduction translates to improved performance, particularly for indexing and joins. We show that learned models reduce collisions in some cases, depending on how the data is distributed. To evaluate the effectiveness of learned models as hash functions, we test them with bucket chaining, linear probing, and cuckoo hash tables. We find that learned models can (1) yield 1.4x lower probe latency and (2) reduce non-partitioned hash join runtime by 28% over the next best baseline for certain datasets. On the other hand, if the data distribution is not suitable, we either see no gains or see worse performance. In summary, we find that learned models can indeed outperform traditional hash functions, but only for certain data distributions.
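    As a rough sketch of the idea (not the paper's implementation), the snippet below uses an empirical-CDF model as the hash function in a bucket-chaining table; the class and parameter names are hypothetical, and a real learned model would be trained rather than kept as a sorted sample:

```python
# Minimal sketch: a learned hash function (empirical CDF model) driving a
# bucket-chaining hash table. Illustrative only; names are hypothetical.
import bisect
import random


class LearnedChainedHashTable:
    """Bucket-chaining table whose hash is a CDF model of the key distribution."""

    def __init__(self, sample_keys, num_buckets):
        self.num_buckets = num_buckets
        # "Train" the model: a sorted sample acts as an empirical CDF.
        self.model = sorted(sample_keys)
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, key):
        # Predicted CDF position in [0, 1], scaled to a bucket index.
        rank = bisect.bisect_left(self.model, key)
        cdf = rank / max(len(self.model), 1)
        return min(int(cdf * self.num_buckets), self.num_buckets - 1)

    def insert(self, key, value):
        self.buckets[self._hash(key)].append((key, value))

    def probe(self, key):
        # Walk the chain in the predicted bucket.
        for k, v in self.buckets[self._hash(key)]:
            if k == key:
                return v
        return None


if __name__ == "__main__":
    random.seed(0)
    keys = [random.gauss(0, 1) for _ in range(10_000)]
    table = LearnedChainedHashTable(random.sample(keys, 1_000), 4_096)
    for k in keys:
        table.insert(k, k * 2)
    # Average chain length hints at how well the model avoids collisions.
    lengths = [len(b) for b in table.buckets if b]
    print("avg chain length:", sum(lengths) / len(lengths))
    print("probe:", table.probe(keys[0]))
```

    On a distribution the model captures well, keys spread evenly across buckets and chains stay short, which is the source of the collision reduction the abstract describes; on a distribution the model mispredicts, chains grow and performance degrades, matching the reported downside.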