Title: Decay of convolved densities via Laplace transform
Upper pointwise bounds are considered for convolutions of bounded densities in terms of the associated Laplace and Legendre transforms. Applications of these bounds are illustrated in the central limit theorem with respect to the Rényi divergence.
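For orientation, here are the standard definitions behind the abstract, with the classical Chernoff tail estimate as a familiar benchmark (this is background, not the paper's result; notation is ours):

    L(t) = \mathbb{E}\, e^{tX} = \int_{-\infty}^{\infty} e^{tx}\, p(x)\, dx,
    \qquad
    \Lambda^{*}(x) = \sup_{t \in \mathbb{R}} \bigl( tx - \log L(t) \bigr),
    \qquad
    \mathbb{P}(X \ge x) \le e^{-\Lambda^{*}(x)} \quad (x \ge \mathbb{E}X).

The bounds of the title are pointwise rather than of tail type: they control the density of the convolution itself through the same two transforms.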
Award ID(s):
2154001
PAR ID:
10495218
Publisher / Repository:
Institute of Mathematical Statistics
Date Published:
Journal Name:
The Annals of Probability
Volume:
51
Issue:
5
ISSN:
0091-1798
Page Range / eLocation ID:
1603-1615
Subject(s) / Keyword(s):
MSC: 60E, 60F. Key words and phrases: Convolution, decay of densities.
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Bounds are established for integration matrices that arise in the convergence analysis of discrete approximations to optimal control problems based on orthogonal collocation. Weighted Euclidean norm bounds are derived for both Gauss and Radau integration matrices; these weighted norm bounds yield sup-norm bounds in the error analysis. 
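    As a minimal numerical sketch of the objects in the abstract above, the following builds a Legendre-Gauss integration matrix by integrating Lagrange basis polynomials and reports its Euclidean norm; the quadrature-weight scaling at the end is our assumption, not necessarily the weighting used in the paper:

        import numpy as np

        def gauss_integration_matrix(n):
            # A[i, j] = integral from -1 to x_i of the j-th Lagrange basis
            # polynomial through the n Legendre-Gauss nodes x_1..x_n.
            x, w = np.polynomial.legendre.leggauss(n)
            A = np.empty((n, n))
            for j in range(n):
                e = np.zeros(n)
                e[j] = 1.0
                c = np.polyfit(x, e, n - 1)   # j-th Lagrange basis polynomial
                C = np.polyint(c)             # its antiderivative
                A[:, j] = np.polyval(C, x) - np.polyval(C, -1.0)
            return A, x, w

        A, x, w = gauss_integration_matrix(8)
        print("Euclidean norm:", np.linalg.norm(A, 2))
        # One natural weighted Euclidean norm (weights = quadrature weights; assumed):
        S = np.diag(np.sqrt(w))
        print("weighted norm:", np.linalg.norm(S @ A @ np.linalg.inv(S), 2))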
  2. For any simple polygon P we compute the optimal upper and lower angle bounds for triangulating P with Steiner points, and show that these bounds can be attained (except in one special case). The sharp angle bounds for an N-gon are computable in time O(N), even though the number of triangles needed to attain these bounds has no bound in terms of N alone. In general, the sharp upper and lower bounds cannot both be attained by a single triangulation, although this does happen in some cases. For example, we show that any polygon with minimal interior angle θ has a triangulation with all angles in the interval I = [θ, 90° − min(36°, θ)/2], and for θ ≤ 36° both bounds are best possible. Surprisingly, we prove the optimal angle bounds for polygonal triangulations are the same as for triangular dissections. The proof of this verifies, in a stronger form, a 1984 conjecture of Gerver.
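    The interval formula quoted above is easy to evaluate; a one-line check (the function name is ours):

        def angle_interval(theta_deg):
            # I = [theta, 90 - min(36, theta)/2], all angles in degrees
            return theta_deg, 90.0 - min(36.0, theta_deg) / 2.0

        print(angle_interval(30.0))  # (30.0, 75.0)
        print(angle_interval(36.0))  # (36.0, 72.0); for theta <= 36 both ends are sharp
        print(angle_interval(60.0))  # (60.0, 72.0); contains the equilateral 60 degrees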
  3. Ruiz, Francisco; Dy, Jennifer; van de Meent, Jan-Willem (Eds.)
    The softmax function is a ubiquitous component at the output of neural networks and increasingly in intermediate layers as well. This paper provides convex lower bounds and concave upper bounds on the softmax function, which are compatible with convex optimization formulations for characterizing neural networks and other ML models. We derive bounds using both a natural exponential-reciprocal decomposition of the softmax as well as an alternative decomposition in terms of the log-sum-exp function. The new bounds are provably and/or numerically tighter than linear bounds obtained in previous work on robustness verification of transformers. As illustrations of the utility of the bounds, we apply them to verification of transformers as well as of the robustness of predictive uncertainty estimates of deep ensembles. 
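    The paper's bounds are convex/concave functions of the logits; as a point of comparison only, here is the elementary constant interval bound that such work tightens, using the fact that softmax_i is increasing in z_i and decreasing in every other coordinate (all names are ours):

        import numpy as np

        def softmax_box_bounds(l, u):
            # Constant lower/upper bounds on softmax(z) over the box l <= z <= u.
            # Each softmax_i is monotone coordinatewise, so extremes sit at corners.
            l, u = np.asarray(l, float), np.asarray(u, float)
            el, eu = np.exp(l), np.exp(u)
            lower = el / (el + (eu.sum() - eu))  # z_i = l_i, z_j = u_j for j != i
            upper = eu / (eu + (el.sum() - el))  # z_i = u_i, z_j = l_j for j != i
            return lower, upper

        lo, hi = softmax_box_bounds([-1.0, 0.0, 0.5], [0.0, 1.0, 1.5])
        z = np.array([-0.5, 0.5, 1.0])           # any point inside the box
        s = np.exp(z) / np.exp(z).sum()
        assert np.all(lo <= s) and np.all(s <= hi)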
  4. Generalization error bounds are essential to understanding machine learning algorithms. This paper presents novel expected generalization error upper bounds based on the average joint distribution between the output hypothesis and each input training sample. Multiple generalization error upper bounds based on different information measures are provided, including Wasserstein distance, total variation distance, KL divergence, and Jensen-Shannon divergence. Due to the convexity of the information measures, the proposed bounds in terms of Wasserstein distance and total variation distance are shown to be tighter than their counterparts based on individual samples in the literature. An example is provided to demonstrate the tightness of the proposed generalization error bounds.
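    For reference, minimal implementations of three of the information measures named above (Wasserstein distance is omitted; these are generic textbook definitions for discrete distributions, not the paper's bounds):

        import numpy as np

        def kl(p, q):
            # Kullback-Leibler divergence D(p || q)
            p, q = np.asarray(p, float), np.asarray(q, float)
            m = p > 0
            return float(np.sum(p[m] * np.log(p[m] / q[m])))

        def tv(p, q):
            # total variation distance: half the L1 distance
            return 0.5 * float(np.sum(np.abs(np.asarray(p, float) - np.asarray(q, float))))

        def js(p, q):
            # Jensen-Shannon divergence: symmetrized, smoothed KL
            p, q = np.asarray(p, float), np.asarray(q, float)
            m = 0.5 * (p + q)
            return 0.5 * kl(p, m) + 0.5 * kl(q, m)

        p, q = [0.5, 0.3, 0.2], [0.4, 0.4, 0.2]
        print(kl(p, q), tv(p, q), js(p, q))
        # Pinsker's inequality relates two of the measures: tv <= sqrt(kl / 2).
        assert tv(p, q) <= np.sqrt(kl(p, q) / 2)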