Estimating the ε-approximate quantiles or ranks of a stream is a fundamental task in data monitoring. Given a stream x_1, ..., x_n from a universe \mathcal{U} with a total order, an additive-error quantile sketch \mathcal{M} allows us to approximate the rank of any query y ∈ \mathcal{U} up to additive εn error. In 2001, Greenwald and Khanna gave a deterministic algorithm (the GK sketch) that solves the ε-approximate quantile estimation problem using O(ε^{-1} log(εn)) space \cite{greenwald2001space}; recently, this algorithm was shown to be optimal by Cormode and Veselý in 2020 \cite{cormode2020tight}. However, due to the intricacy of the GK sketch and its analysis, oversimplified versions of the algorithm are implemented in practical applications, often without any known theoretical guarantees. In fact, it has remained an open question whether the GK sketch can be simplified while maintaining the optimal space bound. In this paper, we resolve this open question by giving a simplified deterministic algorithm that stores at most (2 + o(1)) ε^{-1} log(εn) elements and solves the additive-error quantile estimation problem; as a side benefit, our algorithm achieves a smaller constant factor than the (11/2) ε^{-1} log(εn) space bound of the original GK sketch \cite{greenwald2001space}. Our algorithm features an easier analysis and still achieves the same optimal asymptotic space complexity as the original GK sketch. Lastly, our simplification enables an efficient data structure implementation, with a worst-case runtime of O(log(1/ε) + log log(εn)) per element for the ordinary ε-approximate quantile estimation problem. Also, for the related "weighted" quantile estimation problem, we give efficient data structures for our simplified algorithm which guarantee a worst-case per-element runtime of O(log(1/ε) + log log(ε W_n / w_min)), improving over the previous upper bound of \cite{assadi2023generalizing}.
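The additive-error rank guarantee described above can be illustrated with a toy, offline summary: keep roughly every (εn)-th element of the sorted input together with its rank. This is only a sketch of the guarantee, not the GK algorithm (which achieves it in a single pass with O(ε^{-1} log(εn)) space); the function names here are hypothetical.

```python
import bisect

def build_summary(stream, eps):
    """Offline illustration of an additive-error quantile summary:
    store an (element, rank) pair at every step of roughly eps*n ranks,
    so any rank query can be answered within eps*n of the truth."""
    data = sorted(stream)
    n = len(data)
    step = max(1, int(eps * n))
    # with distinct elements, index i in sorted order is the rank of data[i]
    return [(data[i], i) for i in range(0, n, step)], n

def approx_rank(summary, q):
    """Rank of the largest stored element <= q; off by at most one step."""
    pairs, n = summary
    keys = [e for e, _ in pairs]
    i = bisect.bisect_right(keys, q)
    return pairs[i - 1][1] if i > 0 else 0
```

The stored pairs occupy O(1/ε) space; the difficulty the GK sketch solves is maintaining such a summary online, as elements arrive in arbitrary order.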
Learned Interpolation for Better Streaming Quantile Approximation with WorstCase Guarantees
An ε-approximate quantile sketch over a stream of n inputs approximates the rank of any query point q (that is, the number of input points less than q) up to an additive error of εn, generally with probability at least 1 − 1/poly(n), while consuming o(n) space. While the celebrated KLL sketch of Karnin, Lang, and Liberty is a provably optimal quantile approximation algorithm over worst-case streams, the approximations it achieves in practice are often far from optimal. Indeed, the most commonly used technique in practice is Dunning's t-digest, which often achieves much better approximations than KLL on real-world data but is known to have arbitrarily large errors in the worst case. We apply interpolation techniques to the streaming quantiles problem to attempt to achieve better approximations on real-world data sets than KLL while maintaining similar guarantees in the worst case.
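The compactor mechanism underlying sketches like KLL can be conveyed in a few lines: each level buffers items of weight 2^h, and when a level fills it promotes a random half of its sorted contents to the next level with doubled weight. This is a toy illustration with hypothetical names and a fixed capacity per level; the real KLL sketch varies capacities across levels and carries a careful probabilistic error analysis.

```python
import random

def kll_like_sketch(stream, k=200, seed=0):
    """Toy KLL-style compactor chain (illustration only, not the real KLL).
    levels[h] holds items of weight 2**h; when a level reaches k items it
    is sorted and every other item (random offset) moves up one level."""
    rng = random.Random(seed)
    levels = [[]]
    for x in stream:
        levels[0].append(x)
        h = 0
        while len(levels[h]) >= k:
            levels[h].sort()
            offset = rng.randrange(2)          # random parity keeps the
            promoted = levels[h][offset::2]    # compaction error unbiased
            levels[h] = []
            if h + 1 == len(levels):
                levels.append([])
            levels[h + 1].extend(promoted)
            h += 1
    return levels

def rank_estimate(levels, q):
    """Weighted count of retained items smaller than q."""
    return sum((2 ** h) * sum(1 for x in lvl if x < q)
               for h, lvl in enumerate(levels))
```

Each compaction shifts any fixed query's rank by at most one item's weight, which is what makes the total error controllable even though most of the stream is discarded.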
Award ID(s): 2022448
NSF-PAR ID: 10430249
Journal Name: SIAM Conference on Applied and Computational Discrete Algorithms
Sponsoring Org: National Science Foundation
More Like this


Recently the long-standing problem of optimal construction of quantile sketches was resolved by Karnin, Lang, and Liberty using the KLL sketch (FOCS 2016). The KLL algorithm is restricted to online insert operations, with no delete operations. For many real-world applications, it is necessary to support delete operations: when the data set is updated dynamically, i.e., when data elements are inserted and deleted, the quantile sketch should reflect the changes. In this paper, we propose KLL±, the first quantile approximation algorithm to operate in the bounded deletion model, accounting for both inserts and deletes in a given data stream. KLL± extends the functionality of KLL sketches to support arbitrary updates with small space overhead. The space bound for KLL± is [EQUATION], where ε and δ are constants that determine precision and failure probability, and α bounds the number of deletions with respect to insert operations. The experimental evaluation of KLL± highlights that, with minimal space overhead, KLL± achieves accuracy in quantile approximation comparable to KLL.
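The bounded-deletion model can be illustrated by two-sided accounting: track insert-side and delete-side counts below a query separately and subtract. This mirrors the model's structure but is not the KLL± construction; the function name, the `'+'`/`'-'` update encoding, and the default α are all hypothetical.

```python
def rank_in_bounded_deletion_stream(updates, q, alpha=2):
    """Toy illustration of the bounded-deletion model (not KLL± itself):
    count inserted and deleted items below q exactly and subtract. If each
    side were replaced by an additive eps-error sketch, the combined rank
    error would be at most eps*(n_ins + n_del), which stays proportional
    to the live stream size as long as deletions are alpha-bounded."""
    n_ins = n_del = below_ins = below_del = 0
    for op, x in updates:
        if op == '+':
            n_ins += 1
            below_ins += x < q
        else:
            n_del += 1
            below_del += x < q
    # bounded deletions: at most a (1 - 1/alpha) fraction of inserts
    assert n_del <= (1 - 1 / alpha) * n_ins, "deletion bound violated"
    return below_ins - below_del
```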

Estimating ranks, quantiles, and distributions over streaming data is a central task in data analysis and monitoring. Given a stream of n items from a data universe equipped with a total order, the task is to compute a sketch (data structure) of size polylogarithmic in n. Given the sketch and a query item y, one should be able to approximate its rank in the stream, i.e., the number of stream elements smaller than or equal to y. Most works to date focused on additive εn error approximation, culminating in the KLL sketch that achieved optimal asymptotic behavior. This paper investigates multiplicative (1 ± ε)-error approximations to the rank. Practical motivation for multiplicative error stems from demands to understand the tails of distributions, and hence for sketches to be more accurate near extreme values. The most space-efficient algorithms due to prior work store either O(log(ε²n)/ε²) or O(log³(εn)/ε) universe items. We present a randomized sketch storing O(log^{1.5}(εn)/ε) items that can (1 ± ε)-approximate the rank of each universe item with high constant probability; this space bound is within an O(√(log(εn))) factor of optimal. Our algorithm does not require prior knowledge of the stream length and is fully mergeable, rendering it suitable for parallel and distributed computing environments.
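A quick numeric comparison shows why multiplicative error is the right notion near extreme values: with n = 10^6 and ε = 1/100, an additive εn guarantee lets the estimate of a rank-10 tail item be off by 10,000, while a multiplicative (1 ± ε) guarantee pins it to within ε times the rank itself.

```python
# Additive vs. multiplicative slack for a tail item (worked example).
n = 10**6
eps_inv = 100                 # eps = 1/100
true_rank = 10                # an item deep in the tail of the distribution

additive_slack = n // eps_inv             # eps * n: 10000, swamps the rank
multiplicative_slack = true_rank / eps_inv  # eps * rank: 0.1, still meaningful
```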

Approximating quantiles and distributions over streaming data has been studied for roughly two decades now. Recently, Karnin, Lang, and Liberty proposed the first asymptotically optimal algorithm for doing so. This manuscript complements their theoretical result by providing practical variants of their algorithm with improved constants. For a given sketch size, our techniques provably reduce the upper bound on the sketch error by a factor of two. These improvements are verified experimentally. Our modified quantile sketch also improves the latency, reducing the worst-case update time from O(1/ε) down to O(log(1/ε)).
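One standard way a per-update cost can drop from O(1/ε) to O(log(1/ε)) is to keep the bottom buffer sorted and insert with binary search rather than scanning it linearly. This is a hedged illustration of that general technique, not necessarily the manuscript's exact mechanism, and the function name is hypothetical.

```python
import bisect

def update_buffer_sorted(buf, x):
    """Binary-search insertion into a sorted level-0 buffer:
    O(log |buf|) comparisons per update instead of an O(|buf|) scan.
    A sorted buffer also makes compaction a slice instead of a sort."""
    bisect.insort(buf, x)
    return buf
```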

Koyejo, S.; Mohamed, S.; Agarwal, A.; Belgrave, D.; Cho, K.; Oh, A. (Ed.) While much progress has been made in understanding the minimax sample complexity of reinforcement learning (RL), that is, the complexity of learning on the “worst-case” instance, such measures of complexity often do not capture the true difficulty of learning. In practice, on an “easy” instance, we might hope to achieve a complexity far better than that achievable on the worst-case instance. In this work we seek to understand the “instance-dependent” complexity of learning near-optimal policies (PAC RL) in the setting of RL with linear function approximation. We propose an algorithm, Pedel, which achieves a fine-grained instance-dependent measure of complexity, the first of its kind in the RL with function approximation setting, thereby capturing the difficulty of learning on each particular problem instance. Through an explicit example, we show that Pedel yields provable gains over low-regret, minimax-optimal algorithms and that such algorithms are unable to hit the instance-optimal rate. Our approach relies on a novel online experiment-design-based procedure which focuses the exploration budget on the “directions” most relevant to learning a near-optimal policy, and may be of independent interest.