Title: Applications of Littlestone Dimension to Query Learning and to Compression
In this paper we give several applications of Littlestone dimension. The first is to the model of [Angluin and Dohrn, 2017], where we extend their results for learning by equivalence queries with random counterexamples. Second, we extend that model to infinite concept classes with an additional source of randomness. Third, we give improved results on the relationship of Littlestone dimension to classes with extended d-compression schemes, proving the analog of a conjecture of [Floyd and Warmuth, 1995] for Littlestone dimension.
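To make the query model concrete, here is a minimal sketch of learning by equivalence queries with random counterexamples over a finite concept class, in the spirit of the Angluin–Dohrn model; the uniform choice of counterexample, the majority-vote learner, and all identifiers are illustrative assumptions, not the paper's construction.

```python
import random

def eq_query_learn(concepts, target, domain, rng=random):
    """Sketch: equivalence-query learning with random counterexamples.
    concepts: finite iterable of frozensets over `domain` (elements assumed
    comparable); target is one of them. The teacher answers a wrong
    hypothesis with a counterexample drawn uniformly from the symmetric
    difference (an assumption made for this illustration)."""
    version_space = list(concepts)
    queries = 0
    while True:
        # Propose the majority-vote hypothesis over the version space.
        hypothesis = {x for x in domain
                      if 2 * sum(x in c for c in version_space) > len(version_space)}
        queries += 1
        disagreement = hypothesis.symmetric_difference(target)
        if not disagreement:
            return hypothesis, queries            # equivalence query succeeds
        x = rng.choice(sorted(disagreement))      # random counterexample
        label = x in target
        # Keep only concepts consistent with the revealed label.
        version_space = [c for c in version_space if (x in c) == label]
```

Because the hypothesis is a majority vote, every counterexample eliminates at least half of the surviving concepts, so the loop issues at most log2(number of concepts) + 1 queries; the interest of the random-counterexample model lies in sharper expected bounds, which the paper relates to Littlestone dimension.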
Award ID(s):
1945251
PAR ID:
10600292
Author(s) / Creator(s):
Editor(s):
Kráľovič, Rastislav; Kučera, Antonín
Publisher / Repository:
Schloss Dagstuhl – Leibniz-Zentrum für Informatik
Date Published:
Volume:
306
ISSN:
1868-8969
ISBN:
978-3-95977-335-5
Page Range / eLocation ID:
42:1-42:10
Subject(s) / Keyword(s):
compression scheme; query learning; random queries; Littlestone dimension; Theory of computation → Query learning; Theory of computation → Randomness, geometry and discrete structures; Mathematics of computing → Combinatoric problems
Format(s):
application/pdf
Size(s):
10 pages; 672376 bytes
Right(s):
Creative Commons Attribution 4.0 International license; info:eu-repo/semantics/openAccess
Sponsoring Org:
National Science Foundation
More Like this
  1. We use algorithmic methods from online learning to explore some important objects at the intersection of model theory and combinatorics, and find natural ways that algorithmic methods can detect and explain (and improve our understanding of) stable structure in the sense of model theory. The main theorem deals with existence of ϵ-excellent sets (which are key to the Stable Regularity Lemma, a theorem characterizing the appearance of irregular pairs in Szemerédi's celebrated Regularity Lemma). We prove that ϵ-excellent sets exist for any ϵ < 1/2 in k-edge stable graphs in the sense of model theory (equivalently, Littlestone classes); earlier proofs had given this only for ϵ < 1/2^{2^k} or so. We give two proofs: the first uses regret bounds from online learning, the second uses Boolean closure properties of Littlestone classes and sampling. We also give a version of the dynamic Sauer-Shelah-Perles lemma appropriate to this setting, related to definability of types. We conclude by characterizing stable/Littlestone classes as those supporting a certain abstract notion of majority: the proof shows that the two distinct, natural notions of majority, arising from measure and from dimension, densely often coincide.
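Since the equivalence between k-edge stable graphs and Littlestone classes is the engine of the result above, the following minimal sketch (an illustration for finite classes, not code from the paper) computes Littlestone dimension directly from the mistake-tree recursion: Ldim(H) ≥ d + 1 exactly when some point x splits H into two nonempty subclasses, one labeling x positive and one negative, each of Littlestone dimension ≥ d.

```python
def littlestone_dim(hypotheses, domain):
    """Littlestone dimension of a finite class over a finite domain,
    via the mistake-tree recursion (exponential time; small inputs only).
    Each hypothesis is a frozenset of the points it labels positive."""
    hs = frozenset(hypotheses)
    if len(hs) <= 1:
        return 0 if hs else -1          # conventions for trivial classes
    best = 0
    for x in domain:
        pos = frozenset(h for h in hs if x in h)   # subclass with h(x) = 1
        neg = hs - pos                             # subclass with h(x) = 0
        if pos and neg:                 # x genuinely splits the class
            best = max(best, 1 + min(littlestone_dim(pos, domain),
                                     littlestone_dim(neg, domain)))
    return best
```

For instance, littlestone_dim([frozenset(), frozenset({1})], [0, 1]) returns 1: one splitting point yields a depth-1 mistake tree, and no deeper tree exists for a two-element class.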
  2. Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms. The primary quantity that characterizes learnability in these settings is the Littlestone dimension of the class of hypotheses [Alon et al., 2019, Ben-David et al., 2009]. This characterization is often interpreted as an impossibility result because classes such as linear thresholds and neural networks have infinite Littlestone dimension. In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature. We show that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries. In particular, we obtain regret and privacy error bounds that depend only on the VC dimension and the bracketing number of a hypothesis class, and on the magnitudes of the perturbations. 
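As a concrete illustration of the smoothed setting (a sketch under assumed details, not the paper's construction): one-dimensional thresholds have infinite Littlestone dimension, yet once nature perturbs each adversarial point by uniform noise of width sigma, running the classical halving algorithm over a sigma-spaced grid of candidate thresholds behaves, in effect, as though the class were finite.

```python
import random

def smoothed_stream(adversarial_points, sigma, rng=random):
    """Smoothed adversary: nature perturbs each adversarially chosen
    point in [0, 1] with uniform noise of width sigma (clipped)."""
    for x in adversarial_points:
        yield min(1.0, max(0.0, x + rng.uniform(-sigma / 2, sigma / 2)))

def halving_over_grid(stream, theta, sigma):
    """Halving over a sigma-spaced grid of thresholds against the
    smoothed stream. The grid is the finite 'effective class': its size
    ~1/sigma replaces the infinite Littlestone dimension in the mistake
    accounting, giving roughly log(1/sigma) halving steps."""
    grid = [i * sigma for i in range(int(1 / sigma) + 2)]
    mistakes = 0
    for x in stream:
        predict = 2 * sum(x >= t for t in grid) > len(grid)  # majority vote
        truth = x >= theta
        mistakes += predict != truth
        survivors = [t for t in grid if (x >= t) == truth]
        grid = survivors or grid   # sketch simplification: never go empty
    return mistakes
```

The fallback on the last line papers over the event that no grid threshold remains consistent with every label; handling that event is exactly where the smoothing probabilities (and, in the paper, the bracketing number) enter the analysis.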
  3. We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially private algorithm. This answers an open question of Alon et al. (STOC 2019), who proved the converse statement (this question was also asked by Neel et al. (FOCS 2019)). Together, these two results yield an equivalence between online learnability and private PAC learnability. We introduce a new notion of algorithmic stability called “global stability” which is essential to our proof and may be of independent interest. We also discuss an application of our results to boosting the privacy and accuracy parameters of differentially private learners.
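A minimal sketch of the global-stability idea (illustrative only; the actual private learner selects its output with a differentially private histogram rather than the plain argmax used here): retrain on independent batches many times and look for one hypothesis that recurs on at least an eta fraction of the runs.

```python
from collections import Counter

def heavy_hitter_hypothesis(train, sample_batches, eta):
    """Global-stability sketch: a learner is globally stable when some
    single hypothesis is produced with probability at least eta over the
    draw of the training sample. Here `train` maps a batch to a hashable
    hypothesis; a recurring hypothesis barely depends on any one example,
    which is what a DP selection step can then exploit."""
    counts = Counter(train(batch) for batch in sample_batches)
    hypothesis, count = counts.most_common(1)[0]
    return hypothesis if count >= eta * len(sample_batches) else None
```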
  4. Let H be a binary-labeled concept class. We prove that H can be PAC learned by an (approximate) differentially private algorithm if and only if it has a finite Littlestone dimension. This implies a qualitative equivalence between online learnability and private PAC learnability. 
  5. We extend our earlier mean field approximation of the Bolker–Pacala model of population dynamics by dividing the population into N classes, using a mean field approximation for each class while also allowing migration between classes and a possibly suppressive influence of one class's population on another. For N ≥ 2, we obtain one symmetric nontrivial equilibrium for the system and give global limit theorems. For N = 2, we calculate all equilibrium solutions, which, under additional conditions, include multiple nontrivial equilibria. Lastly, we prove geometric ergodicity, regardless of the number of classes, when there is no population suppression across the classes.
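A runnable toy version of the two-class case (the logistic functional form, the symmetric migration term, and every coefficient below are assumptions made for illustration, not the paper's exact equations):

```python
def simulate_two_classes(b=2.0, d=1.0, c=0.5, m=0.1, s=0.2,
                         x0=(0.5, 1.5), dt=1e-3, steps=200_000):
    """Euler integration of an assumed two-class mean-field system:
    birth rate b, death rate d, self-limitation c, migration m between
    the classes, and cross-class suppression s (all hypothetical)."""
    x, y = x0
    for _ in range(steps):
        dx = (b - d) * x - c * x * x + m * (y - x) - s * x * y
        dy = (b - d) * y - c * y * y + m * (x - y) - s * x * y
        x, y = x + dt * dx, y + dt * dy
    return x, y
```

For these parameter choices the trajectories settle at the symmetric equilibrium x = y = (b − d)/(c + s), and setting s = 0 (no suppression) recovers the usual logistic level (b − d)/c, loosely mirroring the symmetric nontrivial equilibrium described in the abstract.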