Title: Robust Empirical Risk Minimization with Tolerance
Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today's tech-dominated world, and current theoretical techniques, which require exponential sample complexity and complicated improper learning rules, fall far short of answering the need. In this work we study the fundamental paradigm of (robust) empirical risk minimization (RERM), a simple process in which the learner outputs any hypothesis minimizing its training error. RERM famously fails to robustly learn VC classes (Montasser et al., 2019a), a negative result we show extends even to 'nice' settings such as (bounded) halfspaces. As such, we study a recent relaxation of the robust model called tolerant robust learning (Ashtiani et al., 2022), in which the output classifier is compared to the best achievable error over slightly larger perturbation sets. We show that under geometric niceness conditions, a natural tolerant variant of RERM is indeed sufficient for γ-tolerant robust learning of VC classes over ℝ^d, and requires only Õ(VC(H) · d · log(D/(γδ)) / ϵ²) samples for robustness regions of (maximum) diameter D.
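To make the tolerant-RERM template concrete, here is a minimal Python sketch. It is illustrative only and not the paper's construction: the hypothesis representation, the sampled relaxation of the perturbation ball, and all names (sampled_robust_err, tolerant_rerm, k, r) are hypothetical. The learner simply minimizes a sampled robust training error at radius r; the tolerance γ never changes the training rule and appears only in the guarantee, which compares against the best hypothesis at the inflated radius (1 + γ)r.

```python
import numpy as np

def sampled_robust_err(h, x, y, r, k=64, rng=None):
    """1 if h errs on x or on any of k points sampled uniformly from the
    radius-r ball around x; a sampled stand-in for the exact sup over the
    perturbation set."""
    rng = rng or np.random.default_rng(0)
    if h(x) != y:
        return 1
    for _ in range(k):
        d = rng.normal(size=x.shape)
        d *= r * rng.uniform() ** (1 / x.size) / np.linalg.norm(d)
        if h(x + d) != y:
            return 1
    return 0

def tolerant_rerm(hypotheses, X, Y, r):
    """Output any hypothesis minimizing the (sampled) robust training error
    at radius r. Tolerance is a property of the guarantee, not the rule:
    the output is judged against the best error at radius (1 + gamma) * r."""
    errs = [np.mean([sampled_robust_err(h, x, y, r) for x, y in zip(X, Y)])
            for h in hypotheses]
    return hypotheses[int(np.argmin(errs))]
```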
Award ID(s):
1804829
PAR ID:
10475789
Author(s) / Creator(s):
Publisher / Repository:
Algorithmic Learning Theory
Date Published:
Journal Name:
Algorithmic Learning Theory
Format(s):
Medium: X
Location:
Singapore
Sponsoring Org:
National Science Foundation
More Like this
  1. A private learner is trained on a sample of labeled points and generates a hypothesis that can be used for predicting the labels of newly sampled points while protecting the privacy of the training set [Kasiviswanathan et al., FOCS 2008]. Past research uncovered that private learners may need to exhibit significantly higher sample complexity than non-private learners, as is the case for learning one-dimensional threshold functions [Bun et al., FOCS 2015, Alon et al., STOC 2019]. We explore prediction as an alternative to learning. A predictor answers a stream of classification queries instead of outputting a hypothesis. Earlier work has considered a private prediction model with a single classification query [Dwork and Feldman, COLT 2018]. We observe that when answering a stream of queries, a predictor must modify the hypothesis it uses over time, and in a manner that cannot rely solely on the training set. We introduce private everlasting prediction, taking into account the privacy of both the training set and the (adaptively chosen) queries made to the predictor. We then present a generic construction of private everlasting predictors in the PAC model. The sample complexity of the initial training sample in our construction is quadratic (up to polylog factors) in the VC dimension of the concept class. Our construction allows prediction for all concept classes with finite VC dimension, and in particular threshold functions over infinite domains, for which (traditional) private learning is known to be impossible.
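The everlasting-prediction model is easier to picture with a small interface sketch. This shows only the shape of the model under assumed names (EverlastingPredictor, fit, refresh_every); the actual private construction adds noise and careful privacy accounting that is entirely omitted here. The structural point from the abstract survives: the predictor periodically re-fits on its own past answers, so later predictions no longer depend directly on the private training sample.

```python
import numpy as np

class EverlastingPredictor:
    """Shape of the everlasting-prediction model only: answer an unbounded
    stream of queries, periodically replacing the working hypothesis so no
    single answer leans too heavily on the original training set. The real
    private construction adds noise and privacy accounting omitted here."""

    def __init__(self, train_X, train_y, fit, refresh_every=1000):
        self.fit = fit                   # learner: (X, y) -> hypothesis
        self.h = fit(train_X, train_y)   # initial hypothesis from training data
        self.qX, self.qy = [], []        # self-labeled query history
        self.refresh_every = refresh_every

    def predict(self, x):
        label = self.h(x)
        self.qX.append(x)
        self.qy.append(label)
        # Re-fit on self-labeled queries: future answers depend on the
        # training set only through earlier (already released) predictions.
        if len(self.qX) % self.refresh_every == 0:
            self.h = self.fit(np.array(self.qX), np.array(self.qy))
        return label
```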
  2. Practical and pervasive needs for robustness and privacy in algorithms have inspired the design of online adversarial and differentially private learning algorithms. The primary quantity that characterizes learnability in these settings is the Littlestone dimension of the class of hypotheses [Alon et al., 2019, Ben-David et al., 2009]. This characterization is often interpreted as an impossibility result because classes such as linear thresholds and neural networks have infinite Littlestone dimension. In this paper, we apply the framework of smoothed analysis [Spielman and Teng, 2004], in which adversarially chosen inputs are perturbed slightly by nature. We show that fundamentally stronger regret and error guarantees are possible with smoothed adversaries than with worst-case adversaries. In particular, we obtain regret and privacy error bounds that depend only on the VC dimension and the bracketing number of a hypothesis class, and on the magnitudes of the perturbations. 
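A toy simulation makes the smoothing step concrete. Everything here is an illustrative assumption rather than the paper's algorithm: the class is one-dimensional thresholds (which, per the abstract, have infinite Littlestone dimension, so worst-case online learning is impossible), the learner is follow-the-leader over a finite net, and sigma is the magnitude of nature's perturbation.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.1                                  # magnitude of nature's perturbation
net = np.linspace(0.0, 1.0, 201)             # finite net over threshold class
expert_loss = np.zeros_like(net)             # cumulative loss of each threshold
learner_loss = 0

for t in range(5000):
    x_adv, y = 0.5, t % 2                    # adversary names a point and label...
    x = np.clip(x_adv + rng.uniform(-sigma, sigma), 0.0, 1.0)  # ...nature perturbs it
    h = net[int(np.argmin(expert_loss))]     # follow-the-leader over the net
    learner_loss += int(int(x >= h) != y)
    expert_loss += ((x >= net).astype(int) != y)  # update every expert's loss

print(f"regret after 5000 rounds: {learner_loss - expert_loss.min():.0f}")
```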
  3. Lawrence, N (Ed.)
    This paper addresses the end-to-end sample complexity bound for learning, in closed loop, the state-estimator-based robust H2 controller for an unknown (possibly unstable) Linear Time Invariant (LTI) system, when given a fixed state-feedback gain. We build on the results from Ding et al. (1994) to bridge the gap between the parameterization of all state-estimators and the celebrated Youla parameterization. Rewriting the expression of the relevant closed loop allows the optimal linear observer problem, given a fixed state-feedback gain, to be recast as a convex problem in the Youla parameter. The robust synthesis procedure is performed by considering bounded additive model uncertainty on the coprime factors of the plant, such that a min-max optimization problem is formulated for the robust H2 controller via an observer approach. The closed-loop identification scheme follows Zhang et al. (2021), where the nominal model of the true plant is identified by constructing a Hankel-like matrix from a single time series of noisy, finite-length input-output data, using the ordinary least squares algorithm from Sarkar et al. (2020). Finally, an H∞ bound on the estimated model error is provided, since the robust synthesis procedure requires bounded additive uncertainty on the coprime factors of the model. Zhang et al. (2022b) is the extended version of this paper.
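As one concrete piece of this pipeline, the identification step can be sketched with a simplified least-squares fit: regress outputs on a Hankel-like stack of past inputs to recover the leading Markov (impulse-response) parameters. This open-loop FIR regression, with assumed names (estimate_markov_params, m), is a stand-in for intuition, not the cited closed-loop procedure of Sarkar et al. (2020).

```python
import numpy as np

def estimate_markov_params(u, y, m):
    """OLS estimate of the first m Markov (impulse-response) parameters of
    an LTI system from a single input-output trajectory: regress y_t on the
    Hankel-like regressor [u_t, u_{t-1}, ..., u_{t-m+1}]."""
    T = len(y)
    Phi = np.array([u[t - m + 1 : t + 1][::-1] for t in range(m - 1, T)])
    G, *_ = np.linalg.lstsq(Phi, y[m - 1 :], rcond=None)
    return G  # G[k] estimates the response of y_t to u_{t-k}

# Toy check: y_t = 0.9 u_t + 0.3 u_{t-1} + noise.
rng = np.random.default_rng(0)
u = rng.normal(size=2000)
y = 0.9 * u + 0.3 * np.concatenate(([0.0], u[:-1])) + 0.01 * rng.normal(size=2000)
print(estimate_markov_params(u, y, 4)[:2])  # approximately [0.9, 0.3]
```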
  4.
    We study the problems of identity and closeness testing of n-dimensional product distributions. Prior works of Canonne et al. (2017) and Daskalakis and Pan (2017) have established tight sample complexity bounds for non-tolerant testing over a binary alphabet: given two product distributions P and Q over a binary alphabet, distinguish between the cases P = Q and d_TV(P, Q) > ε. We build on this prior work to give a more comprehensive map of the complexity of testing of product distributions by investigating tolerant testing with respect to several natural distance measures and over an arbitrary alphabet. Our study gives a fine-grained understanding of how the sample complexity of tolerant testing varies with the distance measures for product distributions. In addition, we also extend one of our upper bounds on product distributions to bounded-degree Bayes nets.
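For intuition on why product structure helps, note that the squared Hellinger distance tensorizes over coordinates, so a naive plug-in estimate is easy to write down. The sketch below (the names hellinger_sq_product and plugin_tolerant_test, and the accept threshold, are all hypothetical) is illustrative only and makes no claim to the paper's sample-optimal guarantees.

```python
import numpy as np

def hellinger_sq_product(p, q):
    """Squared Hellinger distance between binary product distributions with
    coordinate means p, q. For products, the Bhattacharyya coefficient
    factors across coordinates, so H^2 = 1 - prod_i BC(P_i, Q_i)."""
    bc = np.sqrt(p * q) + np.sqrt((1 - p) * (1 - q))
    return 1.0 - np.prod(bc)

def plugin_tolerant_test(X, Y, eps1, eps2):
    """Naive plug-in tolerant closeness test from samples X ~ P, Y ~ Q
    (rows = samples over {0,1}^n): declare 'close' if the estimated
    distance falls below the midpoint of the tolerance interval."""
    p_hat = (X.sum(axis=0) + 1) / (len(X) + 2)   # smoothed coordinate means
    q_hat = (Y.sum(axis=0) + 1) / (len(Y) + 2)
    d_hat = hellinger_sq_product(p_hat, q_hat)
    return "close" if d_hat < (eps1 + eps2) / 2 else "far"
```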
  5. In the problem of learning a class ratio from unlabeled data, which we call CR learning, the training data is unlabeled; only the ratios, or proportions, of examples receiving each label are given. The goal is to learn a hypothesis that predicts the proportions of labels on the distribution underlying the sample. This model of learning is applicable to a wide variety of settings, including predicting the number of votes for candidates in political elections from polls. In this paper, we formally define this learning problem, resolve foundational questions regarding the computational complexity of CR learning, and characterize its relationship to PAC learning. Among our results, we show, perhaps surprisingly, that for finite VC classes what can be efficiently CR learned is a strict subset of what can be learned efficiently in PAC, under standard complexity assumptions. We also show that there exist classes of functions whose CR learnability is independent of ZFC, the standard set-theoretic axioms. This implies that CR learning cannot be easily characterized (as PAC learning is by VC dimension).
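The objective is simple to state in code. The brute-force rule below (the names cr_fit and target_ratio are hypothetical) just returns the hypothesis whose predicted positive rate on the unlabeled sample best matches the given proportion; the paper's results concern when anything like this can be done efficiently, not this particular rule.

```python
import numpy as np

def cr_fit(hypotheses, X, target_ratio):
    """Return the hypothesis whose predicted positive rate on the unlabeled
    sample X is closest to the given label proportion (binary labels)."""
    rates = np.array([np.mean([h(x) for x in X]) for h in hypotheses])
    return hypotheses[int(np.argmin(np.abs(rates - target_ratio)))]

# Toy usage: thresholds on [0, 1]; 70% of uniform mass lies above 0.3.
rng = np.random.default_rng(0)
X = rng.uniform(size=500)
thresholds = [lambda x, t=t: int(x >= t) for t in np.linspace(0, 1, 101)]
h = cr_fit(thresholds, X, target_ratio=0.7)   # recovers a threshold near 0.3
```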