Title: On the Hardness of Learning to Stabilize Linear Systems
Inspired by the work of Tsiamis et al. [1], in this paper we study the statistical hardness of learning to stabilize linear time-invariant systems. Hardness is measured by the number of samples required to achieve a learning task with a given probability. The work in [1] shows that there exist system classes that are hard to learn to stabilize, the core reason being the hardness of identification. Here we present a class of systems that can be easy to identify, thanks to a non-degenerate noise process that excites all modes, but whose sample complexity of stabilization still increases exponentially with the system dimension. We tie this result to the hardness of co-stabilizability for this class of systems using ideas from robust control.
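For readers outside control theory, a minimal sketch of the setting the abstract refers to may help: a linear time-invariant system x_{t+1} = A x_t + B u_t + w_t driven by non-degenerate Gaussian noise, a state-feedback controller u_t = K x_t, and stabilization meaning the closed-loop matrix A + BK has spectral radius below one. The matrices, gain, horizon, and dimensions below are illustrative placeholders, not the hard instances constructed in the paper.

```python
# Illustrative sketch only: a toy LTI system, not the hard class from the paper.
import numpy as np

rng = np.random.default_rng(0)
n, T = 3, 200                      # state dimension and horizon (hypothetical values)

# Controllable canonical form; the open-loop system is unstable.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.8, -0.3, 1.6]])
B = np.array([[0.0], [0.0], [1.0]])
K = np.array([[-0.8, 0.3, -1.6]])  # cancels the last row, so A + B K is nilpotent

def spectral_radius(M):
    return max(abs(np.linalg.eigvals(M)))

def rollout(gain, noise_std=1.0):
    """Simulate x_{t+1} = A x_t + B u_t + w_t with u_t = gain @ x_t.
    The full-rank Gaussian noise w_t excites every mode, which is what
    makes systems in the class discussed above easy to identify."""
    x = np.zeros(n)
    states = []
    for _ in range(T):
        u = gain @ x
        x = A @ x + B @ u + noise_std * rng.standard_normal(n)
        states.append(x.copy())
    return np.array(states)

print("open-loop   rho(A)      =", round(spectral_radius(A), 3))         # > 1: unstable
print("closed-loop rho(A + BK) =", round(spectral_radius(A + B @ K), 3))  # 0: stabilized
print("max state norm under noise:",
      round(np.linalg.norm(rollout(K), axis=1).max(), 3))
```

With a stabilizing gain the state stays bounded despite the persistent noise; the paper's question is how many samples a learner needs before it can produce such a gain for an unknown system.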
Award ID(s):
1931982
PAR ID:
10480624
Publisher / Repository:
IEEE Conference on Decision and Control (CDC)
Date Published:
Journal Name:
Proceedings of the IEEE Conference on Decision and Control
ISSN:
0743-1546
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Under Review for COLT 2024 (Ed.)
    In the well-studied agnostic model of learning, the goal of a learner -- given examples from an arbitrary joint distribution on ℝ^d × {±1} -- is to output a hypothesis that is competitive (to within ϵ) of the best fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes in this model, we introduce a smoothed analysis framework where we require a learner to compete only with the best classifier that is robust to small random Gaussian perturbation. This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka multi-index model) and (2) has a bounded Gaussian surface area. This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians. Perhaps surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of k-halfspaces in time k^{poly(log k/(ϵγ))}, where γ is the margin parameter. Before our work, the best-known runtime was exponential in k (Arriaga and Vempala, 1999a).
  2. We study the computational complexity of estimating local observables for Gibbs distributions. A simple combinatorial example is the average size of an independent set in a graph (a brute-force toy computation of this observable is sketched after this list). A recent work of Galanis et al. (2021) established NP-hardness of approximating the average size of an independent set, utilizing hardness of the corresponding optimization problem and the related phase-transition behavior. We instead consider settings where the underlying optimization problem is easily solvable. Our main contribution is to classify the complexity of approximating a wide class of observables via a generic reduction from approximate counting to the problem of estimating local observables. The key idea is to use the observables to interpolate the counting problem. Using this new approach, we are able to study observables on bipartite graphs, where the underlying optimization problem is easy but the counting problem is believed to be hard. Bipartite graphs are the most well-studied class of graphs that was excluded from previous hardness results. We establish hardness of estimating the average size of an independent set in bipartite graphs of maximum degree 6; more generally, we show tight hardness results for general vertex-edge observables for antiferromagnetic 2-spin systems on bipartite graphs. Our techniques go beyond 2-spin systems, and for the ferromagnetic Potts model we establish hardness of approximating the number of monochromatic edges in the same region as known hardness results for approximate counting.
  3. In traditional models of supervised learning, the goal of a learner -- given examples from an arbitrary joint distribution on ℝ^d × {±1} -- is to output a hypothesis that is competitive (to within ϵ) of the best fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes, we introduce a smoothed-analysis framework that requires a learner to compete only with the best classifier that is robust to small random Gaussian perturbation. This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka multi-index model) and (2) has a bounded Gaussian surface area. This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians. Surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of k-halfspaces in time k^{poly(log k/(ϵγ))}, where γ is the margin parameter. Before our work, the best-known runtime was exponential in k (Arriaga and Vempala, 1999).
  4. Graphical models are powerful tools for modeling high-dimensional data, but learning graphical models in the presence of latent variables is well known to be difficult. In this work we give new results for learning Restricted Boltzmann Machines, probably the most well-studied class of latent variable models. Our results are based on new connections to learning two-layer neural networks under ℓ∞-bounded input; for both problems, we give nearly optimal results under the conjectured hardness of sparse parity with noise. Using the connection between RBMs and feedforward networks, we also initiate the theoretical study of supervised RBMs [Hinton, 2012], a version of neural-network learning that couples distributional assumptions induced from the underlying graphical model with the architecture of the unknown function class. We then give an algorithm for learning a natural class of supervised RBMs with better runtime than what is possible for its related class of networks without distributional assumptions.
  5. In the well-studied agnostic model of learning, the goal of a learner -- given examples from an arbitrary joint distribution -- is to output a hypothesis that is competitive (to within ϵ) of the best fitting concept from some class. In order to escape strong hardness results for learning even simple concept classes in this model, we introduce a smoothed analysis framework where we require a learner to compete only with the best classifier that is robust to small random Gaussian perturbation. This subtle change allows us to give a wide array of learning results for any concept that (1) depends on a low-dimensional subspace (aka multi-index model) and (2) has a bounded Gaussian surface area. This class includes functions of halfspaces and (low-dimensional) convex sets, cases that are only known to be learnable in non-smoothed settings with respect to highly structured distributions such as Gaussians. Perhaps surprisingly, our analysis also yields new results for traditional non-smoothed frameworks such as learning with margin. In particular, we obtain the first algorithm for agnostically learning intersections of k-halfspaces in time k^{poly(log k/(ϵγ))}, where γ is the margin parameter. Before our work, the best-known runtime was exponential in k (Arriaga and Vempala, 1999).
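As a toy illustration of the observable discussed in the second entry above, the sketch below computes the average size of an independent set exactly, by enumerating all vertex subsets of a tiny bipartite graph and weighting them by the hard-core Gibbs measure mu(I) ∝ λ^|I|. The graph, fugacity, and function name are hypothetical choices for illustration; the hardness results concern approximating this quantity on large bipartite graphs, where exhaustive enumeration is infeasible.

```python
# Toy brute-force computation of a local observable: the average size of an
# independent set under the hard-core Gibbs distribution mu(I) ∝ lam^|I|.
# Exhaustive enumeration only works for tiny graphs; the cited hardness results
# are about approximating this quantity at scale.
from itertools import combinations

def average_independent_set_size(vertices, edges, lam=1.0):
    edge_set = {frozenset(e) for e in edges}
    total_weight = 0.0
    weighted_size = 0.0
    for k in range(len(vertices) + 1):
        for subset in combinations(vertices, k):
            chosen = set(subset)
            # Independent set: no edge has both endpoints inside the subset.
            if all(not e <= chosen for e in edge_set):
                w = lam ** k
                total_weight += w
                weighted_size += w * k
    return weighted_size / total_weight

# A small bipartite example: the 4-cycle 0-1-2-3.
verts = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(average_independent_set_size(verts, edges, lam=1.0))  # 8/7 ≈ 1.143
```

At fugacity λ = 1 the Gibbs distribution is uniform over the 7 independent sets of the 4-cycle, so the observable evaluates to 8/7.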