Abstract: For each prime p and each positive integer d, we construct the first examples of second countable, topologically simple p-adic Lie groups of dimension d whose Lie algebras are abelian. This answers several questions of Glöckner and Caprace–Monod. The proof relies on a generalization of small cancellation methods that applies to central extensions of acylindrically hyperbolic groups.
This content will become publicly available on November 21, 2025
On the Total Perimeter of Pairwise Disjoint Convex Bodies
In this note we introduce a pseudometric on closed convex planar curves based on distances between normal lines and show its basic properties. Then we use this pseudometric to give a shorter proof of the theorem by Pinchasi that the sum of perimeters of k convex planar bodies with disjoint interiors contained in a convex body of perimeter p and diameter d is not greater than p + 2(k − 1)d.
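For concreteness, the inequality in the abstract reads as follows; the numeric instance in the comments is our own illustration, not taken from the paper.

```latex
% Total perimeter bound for convex bodies C_1, ..., C_k with pairwise
% disjoint interiors inside a convex body of perimeter p and diameter d:
\operatorname{per}(C_1) + \cdots + \operatorname{per}(C_k) \le p + 2(k-1)d
% Illustration (ours): inside the unit square (p = 4, d = \sqrt{2}),
% any k = 3 such bodies have total perimeter at most
% 4 + 2(3-1)\sqrt{2} = 4 + 4\sqrt{2} \approx 9.66.
```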
- Award ID(s): 2054536
- PAR ID: 10623597
- Publisher / Repository: AKJournals
- Date Published:
- Journal Name: Studia Scientiarum Mathematicarum Hungarica
- Volume: 61
- Issue: 3
- ISSN: 0081-6906
- Page Range / eLocation ID: 237-250
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- This paper studies differentially private stochastic convex optimization (DP-SCO) in the presence of heavy-tailed gradients, where only a kth-moment bound on sample Lipschitz constants is assumed, instead of a uniform bound. The authors propose a reduction-based approach that achieves the first near-optimal error rates (up to logarithmic factors) in this setting. Specifically, under (ε, δ)-approximate differential privacy, they achieve an error bound of G_2/√n + G_k · (√d/(nε))^(1−1/k), up to a mild polylogarithmic factor in 1/δ, where G_2 and G_k are the 2nd and kth moment bounds on sample Lipschitz constants. This nearly matches the lower bound established by Lowy and Razaviyayn (2023). Beyond the basic result, the authors introduce a suite of private algorithms that further improve performance under additional assumptions: an optimal algorithm under a known Lipschitz constant, a near-linear time algorithm for smooth functions, and an optimal linear-time algorithm for smooth generalized linear models.
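As background for where such guarantees come from, here is a minimal sketch of the standard clipped-mean Gaussian mechanism for per-sample gradients. This is our illustration of a generic primitive, not the authors' reduction; the clipping threshold and noise calibration below are textbook choices and all names are hypothetical.

```python
import numpy as np

def private_clipped_mean(grads, clip, eps, delta, rng):
    """Generic (eps, delta)-DP mean of per-sample gradients.

    A minimal sketch (NOT the paper's algorithm): clip each sample
    gradient to L2 norm `clip`, average, then add Gaussian noise
    calibrated to the clipped mean's L2 sensitivity of clip / n.
    """
    n = len(grads)
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in grads]
    mean = np.mean(clipped, axis=0)
    # Textbook Gaussian-mechanism calibration for (eps, delta)-DP.
    sigma = (clip / n) * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Heavy-tailed synthetic gradients (Student-t), as in the kth-moment setting.
rng = np.random.default_rng(0)
grads = rng.standard_t(df=3, size=(1000, 5))
print(private_clipped_mean(grads, clip=5.0, eps=1.0, delta=1e-6, rng=rng))
# The paper's target rate, for comparison:
#   G_2 / sqrt(n) + G_k * (sqrt(d) / (n * eps)) ** (1 - 1/k)
```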
- Traditional models of supervised learning require a learner, given examples from an arbitrary joint distribution on ℝ^d × {±1}, to output a hypothesis that competes (to within ε) with the best fitting concept from a class. To overcome hardness results for learning even simple concept classes, this paper introduces a smoothed-analysis framework that only requires competition with the best classifier robust to small random Gaussian perturbations. This subtle shift enables a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (multi-index model) and (2) has bounded Gaussian surface area. This class includes functions of halfspaces and low-dimensional convex sets, which are only known to be learnable in non-smoothed settings with respect to highly structured distributions like Gaussians. The analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, the authors present the first algorithm for agnostically learning intersections of k halfspaces in time k^{poly(log k, ε, γ)}, where γ is the margin parameter. Previously, the best-known runtime was exponential in k (Arriaga and Vempala, 1999).
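One plausible way to formalize the smoothed benchmark described above (our reading; σ denotes the Gaussian perturbation scale, and the paper's exact normalization may differ):

```latex
% Smoothed benchmark: the learner's hypothesis h need only compete with
% the best concept evaluated on Gaussian-perturbed inputs, rather than
% with the best concept on the original inputs.
\mathrm{err}(h) \;\le\; \min_{c \in \mathcal{C}} \;
  \mathbb{E}_{z \sim N(0, \sigma^2 I_d)} \,
  \Pr_{(x, y) \sim D}\bigl[\, c(x + z) \neq y \,\bigr] \;+\; \epsilon
```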
- We prove that some exact geometric pattern matching problems reduce in linear time to k-SUM when the pattern has a fixed size k. This holds in the real RAM model for searching for a similar copy of a set of k ≥ 3 points within a set of n points in the plane, and for searching for an affine image of a set of k ≥ d + 2 points within a set of n points in d-space. As corollaries, we obtain improved real RAM algorithms and decision trees for the two problems. In particular, they can be solved by algebraic decision trees of near-linear height.
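To make the plane case concrete, the sketch below is a naive brute-force baseline (a hypothetical helper of ours, not the paper's linear-time reduction to k-SUM) that tests the similarity invariant directly for k = 3, with points encoded as complex numbers.

```python
from itertools import permutations

def find_similar_copy(pattern, points):
    """Brute-force search for a similar copy (rotation + scaling +
    translation) of a 3-point pattern among `points`, all given as
    complex numbers. A triple (p, q, r) is similar to (a, b, c) exactly
    when the complex ratios (r - p)/(q - p) and (c - a)/(b - a) agree.
    Naive O(n^3) baseline for illustration only.
    """
    a, b, c = pattern
    key = (c - a) / (b - a)  # similarity invariant of the pattern
    for p, q, r in permutations(points, 3):
        if q != p and abs((r - p) / (q - p) - key) < 1e-9:
            return p, q, r
    return None

pts = [0 + 0j, 2 + 0j, 1 + 1j, 5 + 3j, 7 + 3j, 6 + 4j]
print(find_similar_copy((0j, 1 + 0j, 0.5 + 0.5j), pts))
```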
- Random dimensionality reduction is a versatile tool for speeding up algorithms for high-dimensional problems. We study its application to two clustering problems: the facility location problem, and the single-linkage hierarchical clustering problem, which is equivalent to computing the minimum spanning tree. We show that if we project the input pointset X onto a random d = O(d_X)-dimensional subspace (where d_X is the doubling dimension of X), then the optimum facility location cost in the projected space approximates the original cost up to a constant factor. We show an analogous statement for the minimum spanning tree, but with the dimension d having an extra log log n term and the approximation factor being arbitrarily close to 1. Furthermore, we extend these results to approximating solutions instead of just their costs. Lastly, we provide experimental results to validate the quality of solutions and the speedup due to the dimensionality reduction. Unlike several previous papers studying this approach in the context of k-means and k-medians, our dimension bound does not depend on the number of clusters but only on the intrinsic dimensionality of X.
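A minimal empirical sketch of the minimum spanning tree statement, using an off-the-shelf Gaussian projection. The dimensions and sample size are illustrative choices of ours, and the fixed target dimension stands in for the paper's doubling-dimension bound.

```python
import numpy as np
from scipy.spatial import distance_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def mst_cost(points):
    """Total edge weight of the Euclidean minimum spanning tree."""
    return minimum_spanning_tree(distance_matrix(points, points)).sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 100))        # n = 500 points in 100 dims

# Project onto a random d-dimensional subspace (Gaussian sketch),
# scaled so squared norms are preserved in expectation.
d = 20
G = rng.standard_normal((100, d)) / np.sqrt(d)
Y = X @ G

print("original MST cost :", mst_cost(X))
print("projected MST cost:", mst_cost(Y))  # close, up to small distortion
```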