A classic problem in statistics is the estimation of the expectation of random variables from samples. This gives rise to the tightly connected problems of deriving concentration inequalities and confidence sequences, i.e., confidence intervals that hold uniformly over time. Previous studies have shown that it is possible to convert the regret guarantee of an online learning algorithm into concentration inequalities, but these concentration results were not tight. In this paper, we show that the regret guarantees of universal portfolio algorithms applied to the online learning problem of betting give rise to new implicit time-uniform concentration inequalities for bounded random variables. The key feature of our concentration results is that they are centered around the maximum log wealth of the best fixed betting strategy in hindsight. We propose numerical methods to solve these implicit inequalities, which yield confidence sequences that enjoy the empirical Bernstein rate with the optimal asymptotic behavior while never being worse than Bernoulli-KL confidence bounds. We further show that our confidence sequences are never vacuous, even with a single sample, for any given target failure rate δ ∈ (0, 1). Our empirical study shows that our confidence bounds achieve state-of-the-art performance, especially in the small-sample regime.
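The betting construction behind such confidence sequences can be sketched in a few lines: a candidate mean m stays in the confidence set as long as no fixed bet against it would have multiplied its wealth past 1/δ. The grid resolutions and the 0.95 bet-fraction cap below are illustrative choices of ours, not the paper's actual algorithm (which inverts a tighter, regret-based bound for universal portfolios):

```python
import math

def betting_ci(xs, delta=0.05, m_grid=200, lam_grid=50):
    """Confidence interval for the mean of [0,1]-valued samples via the
    betting / test-martingale idea: reject a candidate mean m when the
    best fixed bet in hindsight grows its wealth above 1/delta.
    Illustrative sketch; grid sizes and the 0.95 cap are arbitrary."""
    thresh = math.log(1.0 / delta)
    lo, hi = None, None
    for i in range(1, m_grid):
        m = i / m_grid
        # admissible fixed bets keep wealth positive: lam in (-1/(1-m), 1/m);
        # stay strictly inside the range to avoid log(0)
        lmin, lmax = -0.95 / (1.0 - m), 0.95 / m
        best = max(
            sum(math.log1p(lam * (x - m)) for x in xs)
            for j in range(lam_grid + 1)
            for lam in [lmin + (lmax - lmin) * j / lam_grid]
        )
        if best < thresh:          # m not rejected -> inside the CI
            lo = m if lo is None else lo
            hi = m
    return lo, hi
```

The returned interval is the hull of the accepted grid points; a finer λ grid can only shrink it, since a coarser grid underestimates the best fixed bet's wealth.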
Consistency of empirical distributions of sequences of graph statistics in networks with dependent edges
One of the first steps in applications of statistical network analysis is frequently to produce summary charts of important features of the network. Many of these features take the form of sequences of graph statistics counting the number of realized events in the network, such as degree distributions and edgewise shared partner distributions. We provide conditions under which the empirical distributions of sequences of graph statistics are consistent in the L-infinity norm in settings where edges in the network are dependent. We accomplish this task by deriving concentration inequalities that bound probabilities of deviations of graph statistics from the expected value under weak dependence conditions. We apply our concentration inequalities to empirical distributions of sequences of graph statistics and derive non-asymptotic bounds on the L-infinity error which hold with high probability. Our non-asymptotic results are then extended to demonstrate uniform convergence almost surely in selected examples. We illustrate our theoretical results through examples, simulation studies, and an application.
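As a toy illustration of the objects involved, suppose a graph is given as an edge list; the empirical degree distribution and the L-infinity distance between two such distributions can be computed as follows (plain Python; the helper names are ours, not from the paper):

```python
from collections import Counter

def degree_distribution(edges, n):
    """Empirical degree distribution of a simple graph on nodes 0..n-1:
    the fraction of nodes attaining each degree."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    # Counter returns 0 for isolated nodes, so every node is counted
    counts = Counter(deg[i] for i in range(n))
    return {d: c / n for d, c in counts.items()}

def sup_norm(p, q):
    """L-infinity distance between two distributions given as dicts."""
    return max(abs(p.get(d, 0.0) - q.get(d, 0.0)) for d in set(p) | set(q))
```

The concentration results in the paper control exactly this kind of sup-norm deviation between the empirical distribution and its expectation, under weak edge dependence.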
- Award ID(s): 2345043
- PAR ID: 10590180
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Journal of Multivariate Analysis
- Volume: 207
- ISSN: 0047-259X
- Page Range / eLocation ID: 105420
- Subject(s) / Keyword(s): Empirical distributions of graph statistics; Network data; Statistical network analysis
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
We consider the problem of estimating the mean of a sequence of random elements f(θ, X_1), …, f(θ, X_n), where f is a fixed scalar function, S = (X_1, …, X_n) are independent random variables, and θ is a possibly S-dependent parameter. An example of such a problem would be to estimate the generalization error of a neural network trained on n examples, where f is a loss function. Classically, this problem is approached through concentration inequalities holding uniformly over compact parameter sets of functions f, for example as in Rademacher or VC type analyses. However, in many problems such inequalities often yield numerically vacuous estimates. Recently, the PAC-Bayes framework has been proposed as a better alternative for this class of problems, for its ability to often give numerically non-vacuous bounds. In this paper, we show that we can do even better: we show how to refine the proof strategy of the PAC-Bayes bounds and achieve even tighter guarantees. Our approach is based on the coin-betting framework that derives the numerically tightest known time-uniform concentration inequalities from the regret guarantees of online gambling algorithms. In particular, we derive the first PAC-Bayes concentration inequality based on the coin-betting approach that holds simultaneously for all sample sizes. We demonstrate its tightness by showing that relaxing it recovers a number of previous results in closed form, including the Bernoulli-KL and empirical Bernstein inequalities. Finally, we propose an efficient algorithm to numerically calculate confidence sequences from our bound, which often generates non-vacuous confidence bounds even with one sample, unlike the state-of-the-art PAC-Bayes bounds.
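One of the classical bounds mentioned as a relaxation, the Bernoulli-KL inequality, is implicit: for an empirical mean p̂ of n bounded samples, the upper confidence limit is the largest q with kl(p̂ ‖ q) ≤ log(1/δ)/n. A bisection sketch of that inversion (illustrative; not the paper's algorithm):

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q)."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_upper_bound(p_hat, n, delta):
    """Upper confidence limit: the largest q >= p_hat satisfying
    kl(p_hat || q) <= log(1/delta) / n, found by bisection
    (kl is increasing in q on [p_hat, 1])."""
    budget = math.log(1.0 / delta) / n
    lo, hi = p_hat, 1.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= budget:
            lo = mid
        else:
            hi = mid
    return lo
```

For p̂ = 0.5, n = 100, δ = 0.05 this gives an upper limit of about 0.62, noticeably tighter than the corresponding Hoeffding limit of about 0.62 + (0.5 + √(log(1/δ)/(2n)) − 0.62) for means away from 1/2.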
-
Under Poincaré-type conditions, upper bounds are explored for the Kolmogorov distance between the distributions of weighted sums of dependent summands and the normal law. Based on improved concentration inequalities on high-dimensional Euclidean spheres, the results extend and refine previous results to non-symmetric models.
-
Statistical distances (SDs), which quantify the dissimilarity between probability distributions, are central to machine learning and statistics. A modern method for estimating such distances from data relies on parametrizing a variational form by a neural network (NN) and optimizing it. These estimators are abundantly used in practice, but corresponding performance guarantees are partial and call for further exploration. In particular, there seems to be a fundamental tradeoff between the two sources of error involved: approximation and estimation. While the former needs the NN class to be rich and expressive, the latter relies on controlling complexity. This paper explores this tradeoff by means of non-asymptotic error bounds, focusing on three popular choices of SDs: the Kullback-Leibler divergence, chi-squared divergence, and squared Hellinger distance. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory. Numerical results validating the theory are also provided.
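The variational form in question, for the KL divergence, is the Donsker-Varadhan lower bound KL(P ‖ Q) ≥ E_P[g(X)] − log E_Q[e^{g(X)}], which neural estimators maximize over networks g. A minimal sketch with a fixed, hand-picked test function instead of a trained network (our simplification, not the paper's estimator):

```python
import math
import random

def dv_kl_lower_bound(xs_p, xs_q, g):
    """Donsker-Varadhan lower bound on KL(P || Q) from samples:
    mean of g under P minus log of the mean of exp(g) under Q.
    Valid for any test function g; tight when g is the log-density ratio."""
    term_p = sum(g(x) for x in xs_p) / len(xs_p)
    term_q = math.log(sum(math.exp(g(x)) for x in xs_q) / len(xs_q))
    return term_p - term_q

random.seed(0)
xs_p = [random.gauss(1.0, 1.0) for _ in range(20000)]  # P = N(1, 1)
xs_q = [random.gauss(0.0, 1.0) for _ in range(20000)]  # Q = N(0, 1)
# g(x) = x - 0.5 is the exact log-density ratio here, so the bound is
# tight: KL(N(1,1) || N(0,1)) = 0.5
est = dv_kl_lower_bound(xs_p, xs_q, lambda x: x - 0.5)
```

The approximation/estimation tradeoff the abstract describes is visible here: a richer class of g can only raise the bound toward the true KL, while a more complex class makes the two empirical averages harder to control.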
-
Consider a network of agents that all want to guess the correct value of some ground truth state. In a sequential order, each agent makes its decision using a single private signal, which has a constant probability of error, as well as observations of actions from its network neighbors earlier in the order. We are interested in enabling network-wide asymptotic truth learning, i.e., that in a network of n agents, almost all agents make a correct prediction with probability approaching one as n goes to infinity. In this paper we study both random orderings and carefully crafted decision orders with respect to the graph topology, as well as sufficient or necessary conditions for a graph to support such a good ordering. We first show that on a sparse graph of average constant degree with a random ordering, asymptotic truth learning does not happen. We then show a rather modest sufficient condition to enable asymptotic truth learning. With the help of this condition we characterize graphs generated from the Erdős-Rényi model and the preferential attachment model. In an Erdős-Rényi graph, unless the graph is super sparse (with O(n) edges) or super dense (nearly a complete graph), there exists a decision ordering that supports asymptotic truth learning. Similarly, any preferential attachment network with a constant number of edges per node can achieve asymptotic truth learning under a carefully designed ordering, but under neither a random ordering nor the arrival order. We also evaluate a variant of the decision ordering on different network topologies and demonstrate clear effectiveness in improving truth learning over random orderings.
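The sequential-guessing setup can be simulated in a few lines. The majority-following rule below is a simple heuristic stand-in for the Bayesian decision rule studied in the paper, and it already exhibits the failure mode that motivates careful orderings, namely information cascades:

```python
import random

def simulate(order, neighbors, truth, p, rng):
    """Sequential guessing on a network: each agent draws a private signal
    (correct with probability p), observes the guesses of neighbors who
    acted earlier, and follows their majority, breaking ties (including
    the no-informed-neighbors case) with its own signal.
    Returns the fraction of agents that guessed the truth."""
    guess = {}
    for a in order:
        signal = truth if rng.random() < p else 1 - truth
        votes = [guess[b] for b in neighbors[a] if b in guess]
        ones = sum(votes)
        zeros = len(votes) - ones
        if ones > zeros:
            guess[a] = 1
        elif zeros > ones:
            guess[a] = 0
        else:
            guess[a] = signal
    return sum(1 for a in order if guess[a] == truth) / len(order)
```

On an empty graph the accuracy is simply the signal quality p; on a complete graph every agent after the first copies the first guess, so the whole network herds on a single signal, correct or not.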