Title: Computing Accurate Probabilistic Estimates of One-D Entropy from Equiprobable Random Samples
We develop a simple Quantile Spacing (QS) method for accurate probabilistic estimation of one-dimensional entropy from equiprobable random samples, and compare it with the popular Bin-Counting (BC) and Kernel Density (KD) methods. In contrast to BC, which uses equal-width bins with varying probability mass, the QS method uses estimates of the quantiles that divide the support of the data-generating probability density function (pdf) into equal-probability-mass intervals. Whereas BC and KD each require optimal tuning of a hyper-parameter whose value varies with sample size and shape of the pdf, QS requires only specification of the number of quantiles to be used. Results indicate, for the class of distributions tested, that the optimal number of quantiles is a fixed fraction of the sample size (empirically determined to be ~0.25–0.35), and that this value is relatively insensitive to distributional form or sample size. This provides a clear advantage over BC and KD, since hyper-parameter tuning is not required. Further, unlike KD, there is no need to select an appropriate kernel type, so QS is applicable to pdfs of arbitrary shape, including those with discontinuous slope and/or magnitude. Bootstrapping is used to approximate the sampling variability distribution of the resulting entropy estimate, and is shown to accurately reflect the true uncertainty. For the four distributional forms studied (Gaussian, Log-Normal, Exponential, and Bimodal Gaussian Mixture), expected estimation bias is less than 1% and uncertainty is low even for samples of as few as 100 data points; in contrast, the small-sample bias can be as large as −10% for KD and −50% for BC. We speculate that estimating quantile locations, rather than bin probabilities, results in more efficient use of the information in the data to approximate the underlying shape of an unknown data-generating pdf.
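The core computation is compact enough to sketch. The Python snippet below is a minimal illustration of a quantile-spacing entropy estimator with bootstrap uncertainty, under the assumption that the pdf is treated as piecewise-uniform between sorted subsample points; the subsampling fraction (frac), bootstrap count, and handling of ties here are illustrative choices, not the authors' exact protocol.

```python
import numpy as np

def qs_entropy(sample, frac=0.25, n_boot=500, rng=None):
    """Quantile-spacing entropy estimate with bootstrap uncertainty.

    A sketch in the spirit of the QS method: a random subsample of
    size ~ frac * N is sorted and treated as quantile estimates that
    split the support into equal-probability-mass intervals, over
    which the pdf is taken as piecewise-uniform.
    """
    rng = np.random.default_rng(rng)
    x = np.asarray(sample, dtype=float)
    n_q = max(2, int(frac * x.size))       # number of quantile points
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        q = np.sort(rng.choice(x, size=n_q, replace=False))
        dq = np.diff(q)                    # spacings between quantiles
        dq = dq[dq > 0]                    # guard against ties
        m = dq.size                        # equal-mass intervals
        # piecewise-uniform density 1/(m*dq_i), mass 1/m per interval:
        # H ~ sum_i (1/m) * log(m * dq_i)
        estimates[b] = np.mean(np.log(m * dq))
    return estimates.mean(), estimates.std()

# example: standard Gaussian, true entropy 0.5*log(2*pi*e) ~ 1.4189
x = np.random.default_rng(1).normal(size=1000)
h_hat, h_spread = qs_entropy(x)
```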
Award ID(s):
1740858
PAR ID:
10294762
Author(s) / Creator(s):
Date Published:
Journal Name:
Entropy
Volume:
23
Issue:
6
ISSN:
1099-4300
Page Range / eLocation ID:
740
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract This paper extends the application of quantile-based Bayesian inference to probability distributions defined in terms of quantiles of observable quantities. Quantile-parameterized distributions are characterized by high shape flexibility and parameter interpretability, making them useful for eliciting information about observables. To encode uncertainty in the quantiles elicited from experts, we propose a Bayesian model based on the metalog distribution and a variant of the Dirichlet prior. We discuss the resulting hybrid expert elicitation protocol, which aims to characterize uncertainty in parameters by asking questions about observable quantities. We also compare and contrast this approach with parametric and predictive elicitation methods. 
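As a concrete anchor for the metalog-based model described above, the sketch below fits a metalog quantile function to a few elicited (probability, quantile) pairs by linear least squares. The elicited values and number of terms are illustrative assumptions, and the paper's Bayesian treatment with a Dirichlet-type prior is not reproduced here.

```python
import numpy as np

# elicited (probability, quantile) pairs -- illustrative values only
probs = np.array([0.10, 0.50, 0.90])
quants = np.array([8.0, 12.0, 20.0])

def metalog_basis(y, k=3):
    """First k terms of the metalog quantile-function basis (Keelin 2016)."""
    L = np.log(y / (1.0 - y))                       # logit term
    cols = [np.ones_like(y), L, (y - 0.5) * L, (y - 0.5)]
    return np.column_stack(cols[:k])

# metalog coefficients a via linear least squares (exact fit here,
# since the number of terms equals the number of elicited points)
a, *_ = np.linalg.lstsq(metalog_basis(probs), quants, rcond=None)

# the fitted quantile function, evaluated on a probability grid
y = np.linspace(0.01, 0.99, 99)
q = metalog_basis(y) @ a
```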
  2. Abstract Accurate classification of high‐dimensional data is important in many scientific applications. We propose a family of high‐dimensional classification methods based upon a comparison of the component‐wise distances of the feature vector of a sample to the within‐class population quantiles. These methods are motivated by the fact that quantile classifiers based on these component‐wise distances are the most powerful univariate classifiers for an optimal choice of the quantile level. A simple aggregation approach for constructing a multivariate classifier based upon these component‐wise distances to the within‐class quantiles is proposed. It is shown that this classifier is consistent with the asymptotically optimal classifier as the sample size increases. Our proposed classifiers result in simple piecewise‐linear decision rule boundaries that can be efficiently trained. Numerical results are shown to demonstrate competitive performance for the proposed classifiers on both simulated data and a benchmark email spam application. 
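A minimal sketch of a component-wise quantile classifier in this spirit follows: each class is summarized by its per-feature theta-quantiles, and a point is assigned to the class minimizing the summed check (quantile) distances. A single shared quantile level theta is assumed here for brevity; the paper's selection of optimal quantile levels and its aggregation details may differ.

```python
import numpy as np

def check_distance(u, theta):
    """Quantile (check) distance rho_theta(u) = u * (theta - 1{u < 0})."""
    return u * (theta - (u < 0).astype(float))

class QuantileClassifier:
    """Component-wise quantile classifier sketch."""

    def __init__(self, theta=0.5):
        self.theta = theta

    def fit(self, X, y):
        # per-feature theta-quantiles of each class's training points
        self.classes_ = np.unique(y)
        self.q_ = np.array([np.quantile(X[y == c], self.theta, axis=0)
                            for c in self.classes_])
        return self

    def predict(self, X):
        # summed check distances, shape (n_samples, n_classes)
        d = np.array([check_distance(X - q, self.theta).sum(axis=1)
                      for q in self.q_]).T
        return self.classes_[np.argmin(d, axis=1)]
```

The piecewise-linear decision boundaries mentioned in the abstract arise directly from the piecewise-linear check distance.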
  3. Radial distribution functions (RDFs) are widely used in molecular simulation and beyond. Most approaches to computing RDFs require assembling a histogram over inter-particle separation distances. In turn, these histograms require a specific (and generally arbitrary) choice of discretization for bins. We demonstrate that this arbitrary choice for binning can lead to significant and spurious phenomena in several commonplace molecular-simulation analyses that make use of RDFs, such as identifying phase boundaries and generating excess entropy scaling relationships. We show that a straightforward approach (which we term Kernel-Averaging Method to Eliminate Length-Of-Bin Effects) mitigates these issues. This approach is based on systematic and mass-conserving mollification of RDFs using a Gaussian kernel. This technique has several advantages compared to existing methods, including being useful for cases where the original particle kinematic data have not been retained, and the only available data are the RDFs themselves. We also discuss the optimal implementation of this approach in the context of several application areas. 
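The mollification step can be illustrated in a few lines. The sketch below smooths a binned RDF with a Gaussian kernel applied to the shell-weighted profile r^2 g(r), which is one plausible reading of "mass-conserving" on a uniform radial grid; the authors' exact kernel-averaging procedure may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def mollify_rdf(r, g, sigma):
    """Gaussian mollification of a binned RDF g(r).

    Smooths the shell-weighted profile r^2 * g(r) and divides the
    weight back out, so the coordination integral ~ r^2 g(r) dr is
    (approximately) preserved away from the domain boundaries.
    """
    dr = r[1] - r[0]                 # assume a uniform radial grid
    h = r**2 * g                     # mass-weighted profile
    h_smooth = gaussian_filter1d(h, sigma / dr, mode="nearest")
    return h_smooth / np.maximum(r, dr)**2   # guard r = 0
```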
  4. An ε-approximate quantile sketch over a stream of n inputs approximates the rank of any query point q (that is, the number of input points less than q) up to an additive error of εn, generally with probability at least 1 − 1/poly(n), while consuming o(n) space. While the celebrated KLL sketch of Karnin, Lang, and Liberty is a provably optimal quantile approximation algorithm over worst-case streams, the approximations it achieves in practice are often far from optimal. Indeed, the technique most commonly used in practice is Dunning's t-digest, which often achieves much better approximations than KLL on real-world data but is known to have arbitrarily large errors in the worst case. We apply interpolation techniques to the streaming quantiles problem to attempt to achieve better approximations on real-world data sets than KLL while maintaining similar guarantees in the worst case.
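To make the interpolation idea concrete, the sketch below estimates the rank of a query point from a compressed summary of sorted representatives and their weights, interpolating linearly between neighbors. This illustrates the general mechanism only; the paper's algorithm and its worst-case guarantees are not reproduced here, and the mid-mass centering convention is an assumption.

```python
import numpy as np

def interpolated_rank(summary_values, summary_weights, q):
    """Estimate rank(q) from a compressed quantile summary.

    summary_values:  sorted representative points kept by a sketch
    summary_weights: number of stream items each point represents
    """
    v = np.asarray(summary_values, dtype=float)
    w = np.asarray(summary_weights, dtype=float)
    # cumulative weight, centered so each representative sits
    # mid-way through its own mass
    ranks = np.cumsum(w) - w / 2.0
    return float(np.interp(q, v, ranks, left=0.0, right=w.sum()))
```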
  5. Sample entropy, an approximation of the Kolmogorov entropy, was proposed to characterize the complexity of a time series. It is essentially defined as −log(B/A), where A denotes the number of matched template pairs of length m and B the number of matched template pairs of length m+1, for a predetermined positive integer m. It has been widely used to analyze physiological signals. As computing sample entropy is time-consuming, the box-assisted, bucket-assisted, x-sort, assisted-sliding-box, and kd-tree-based algorithms were proposed to accelerate its computation. These algorithms require O(N^2) or O(N^(2−1/(m+1))) computational complexity, where N is the length of the time series analyzed; when N is large, their computational costs are substantial. We propose a super-fast Monte Carlo algorithm to estimate sample entropy, with computational cost independent of N and with the estimate converging to the exact sample entropy as the number of repeated experiments becomes large. The convergence rate of the algorithm is also established. Numerical experiments are performed for electrocardiogram time series, electroencephalogram time series, cardiac inter-beat time series, mechanical vibration signals (MVS), meteorological data (MD), and 1/f noise. Numerical results show that the proposed algorithm can gain a 100–1000 times speedup compared to the kd-tree and assisted-sliding-box algorithms while providing satisfactory approximation accuracy.
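The Monte Carlo idea can be sketched directly: estimate the template-match probabilities at lengths m and m+1 from randomly sampled template pairs, so that the per-sample cost does not grow with the series length. The tolerance convention (r as a fraction of the standard deviation) and the pair count below are common defaults assumed for illustration, not the paper's exact estimator or its variance-controlled sampling scheme.

```python
import numpy as np

def mc_sample_entropy(u, m=2, r=0.2, n_pairs=100_000, rng=None):
    """Monte Carlo sample-entropy estimate with cost independent of len(u).

    Randomly sampled template pairs estimate the match probabilities
    at lengths m and m+1; SampEn ~ -log(p_{m+1} / p_m).
    """
    u = np.asarray(u, dtype=float)
    tol = r * u.std()                # tolerance relative to signal scale
    n = u.size - m                   # number of length-(m+1) templates
    rng = np.random.default_rng(rng)
    i = rng.integers(0, n, size=n_pairs)
    j = rng.integers(0, n, size=n_pairs)
    i, j = i[i != j], j[i != j]      # exclude self-matches
    # Chebyshev distances over the first m and first m+1 coordinates
    offs = np.arange(m + 1)
    d = np.abs(u[i[:, None] + offs] - u[j[:, None] + offs])
    match_m = d[:, :m].max(axis=1) <= tol
    match_m1 = d.max(axis=1) <= tol
    # note: assumes at least one length-m match among the sampled pairs
    return -np.log(match_m1.sum() / match_m.sum())
```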