Title: Finite-time Sample Complexity Analysis of Least Square Identifying Stochastic Switched Linear System
Abstract: In this paper, we examine high-probability finite-time guarantees of the least squares method for identifying switched linear systems with process noise and without control input. We consider two scenarios: one in which the switching signal is i.i.d. and one in which it follows a Markov process. We derive concentration inequalities using a martingale-type argument to bound the identification error for each mode, together with concentration lemmas for the switching signal. Our bound is stated in terms of the state dimension, the trajectory length, a finite-time Gramian, and properties of the switching distribution. We then provide simulations that demonstrate the accuracy of the identification and show that the empirical convergence rate is consistent with our theoretical bound.
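To make the estimator concrete, the following is a minimal sketch of mode-wise least squares identification for a switched linear system x_{t+1} = A_{s_t} x_t + w_t with an observed i.i.d. switching signal and no control input, the setting described in the abstract. The simulation setup, function names, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_switched_system(modes, T, sigma_w=0.1, p=None, rng=None):
    """Simulate x_{t+1} = A_{s_t} x_t + w_t with an observed i.i.d. switching signal s_t."""
    rng = np.random.default_rng(rng)
    n = modes[0].shape[0]
    xs = np.zeros((T + 1, n))
    xs[0] = rng.standard_normal(n)
    switches = rng.choice(len(modes), size=T, p=p)    # i.i.d. switching signal
    for t in range(T):
        xs[t + 1] = modes[switches[t]] @ xs[t] + sigma_w * rng.standard_normal(n)
    return xs, switches

def least_squares_per_mode(xs, switches, num_modes):
    """Mode-wise least squares: regress x_{t+1} on x_t over the steps where mode i is active."""
    estimates = []
    for i in range(num_modes):
        idx = np.where(switches == i)[0]
        X, Y = xs[idx], xs[idx + 1]                   # regressors and targets for mode i
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)     # solves min ||X B - Y||, so B ~ A_i^T
        estimates.append(B.T)
    return estimates

# Illustrative example: two stable modes, uniform i.i.d. switching
A0 = np.array([[0.8, 0.1], [0.0, 0.7]])
A1 = np.array([[0.5, -0.2], [0.3, 0.6]])
xs, switches = simulate_switched_system([A0, A1], T=5000, rng=0)
A0_hat, A1_hat = least_squares_per_mode(xs, switches, num_modes=2)
print(np.linalg.norm(A0_hat - A0, 2), np.linalg.norm(A1_hat - A1, 2))  # per-mode spectral-norm errors
```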
Award ID(s): 1932735
PAR ID: 10581402
Author(s) / Creator(s):
Publisher / Repository: IEEE
Date Published:
ISBN: 978-4-9077-6480-7
Page Range / eLocation ID: 488 to 493
Subject(s) / Keyword(s): Linear systems, Instruments, Process control, Switches, Markov processes, Control systems, Complexity theory
Format(s): Medium: X
Location: Tsu, Japan
Sponsoring Org: National Science Foundation
More Like this
  1. We present an algorithm based on posterior sampling (also known as Thompson sampling) that achieves near-optimal worst-case regret bounds when the underlying Markov decision process (MDP) is communicating with a finite, although unknown, diameter. Our main result is a high-probability regret upper bound of [Formula: see text] for any communicating MDP with S states, A actions, and diameter D. Here, regret compares the total reward achieved by the algorithm to the total expected reward of an optimal infinite-horizon undiscounted average-reward policy over time horizon T. This result closely matches the known lower bound of [Formula: see text]. Our techniques involve proving some novel results about the anti-concentration of the Dirichlet distribution, which may be of independent interest.
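As a concrete illustration of the approach in item 1, here is a minimal sketch of posterior (Thompson) sampling for a tabular MDP: Dirichlet posteriors over the transition rows are sampled, a policy is computed for the sampled model, and the posterior is updated from observed transitions. The episodic schedule, known rewards, and finite-horizon planning are simplifying assumptions; this is not the exact algorithm or episode schedule analyzed in the paper.

```python
import numpy as np

def plan_on_sampled_mdp(counts, rewards, horizon, rng):
    """Sample transition rows from Dirichlet posteriors (uniform prior) and run
    finite-horizon value iteration on the sampled MDP; rewards are assumed known."""
    S, A, _ = counts.shape
    P_hat = np.array([[rng.dirichlet(counts[s, a] + 1.0) for a in range(A)] for s in range(S)])
    V = np.zeros(S)
    policy = np.zeros((horizon, S), dtype=int)
    for h in reversed(range(horizon)):
        Q = rewards + P_hat @ V            # Q[s, a] = r(s, a) + sum_s' P_hat(s' | s, a) V(s')
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy

def posterior_sampling(true_P, rewards, episodes, horizon, rng=None):
    """Thompson sampling over episodes: plan on a sampled model, act, update transition counts."""
    rng = np.random.default_rng(rng)
    S, A, _ = true_P.shape
    counts = np.zeros((S, A, S))
    s = 0
    for _ in range(episodes):
        policy = plan_on_sampled_mdp(counts, rewards, horizon, rng)
        for h in range(horizon):
            a = policy[h, s]
            s_next = rng.choice(S, p=true_P[s, a])
            counts[s, a, s_next] += 1      # posterior update is just a count increment
            s = s_next
    return counts
```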
  2. We present a data-driven framework for strategy synthesis for partially known switched stochastic systems. The properties of the system are specified using linear temporal logic (LTL) over finite traces (LTLf), which is as expressive as LTL and enables interpretations over finite behaviors. The framework first learns the unknown dynamics via Gaussian process regression. It then builds a formal abstraction of the switched system in terms of an uncertain Markov model, namely an Interval Markov Decision Process (IMDP), accounting for both the stochastic behavior of the system and the uncertainty in the learning step. We then synthesize a strategy on the resulting IMDP that maximizes the satisfaction probability of the LTLf specification and is robust against all the uncertainties in the abstraction. This strategy is finally refined into a switching strategy for the original stochastic system. We show that this strategy is near-optimal and provide a bound on its distance (error) from the optimal strategy. We experimentally validate our framework on various case studies, including both linear and non-linear switched stochastic systems.
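As a sketch of the synthesis step in item 2, the following implements pessimistic (robust) value iteration on an IMDP for a reachability objective, a simple stand-in for a general LTLf specification. The Gaussian process regression and abstraction-construction steps are omitted, and the interval representation and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def worst_case_expectation(lo, hi, values):
    """Pick a distribution inside the intervals [lo, hi] (assumed to contain a valid
    distribution) that minimizes the expected value: assign the slack to low-value states first."""
    p = lo.copy()
    budget = 1.0 - lo.sum()
    for s in np.argsort(values):                    # lowest-value states get the slack first
        add = min(hi[s] - lo[s], budget)
        p[s] += add
        budget -= add
        if budget <= 0.0:
            break
    return p @ values

def robust_value_iteration(P_lo, P_hi, target, iters=200):
    """Maximize the worst-case probability of reaching `target` states on an IMDP.
    P_lo, P_hi: (S, A, S) interval transition bounds; target: boolean mask of length S."""
    S, A, _ = P_lo.shape
    V = target.astype(float)
    for _ in range(iters):
        Q = np.empty((S, A))
        for s in range(S):
            for a in range(A):
                Q[s, a] = worst_case_expectation(P_lo[s, a], P_hi[s, a], V)
        V = np.where(target, 1.0, Q.max(axis=1))
    return V, Q.argmax(axis=1)                      # pessimistic values and a robust strategy
```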
  3. Generative neural networks have empirically proven very promising as structural priors for compressed sensing, since they can be trained to span low-dimensional data manifolds in high-dimensional signal spaces. Despite the non-convexity of the resulting optimization problem, it has also been shown theoretically that, for neural networks with random Gaussian weights, a signal in the range of the network can be efficiently and approximately recovered from a few noisy measurements. However, a major bottleneck of these theoretical guarantees is a network expansivity condition: each layer of the neural network must be larger than the previous one by a logarithmic factor. Our main contribution is to break this strong expansivity assumption, showing that constant expansivity suffices for efficient recovery algorithms and is, moreover, information-theoretically necessary. To overcome the theoretical bottleneck in existing approaches, we prove a novel uniform concentration theorem for random functions that might not be Lipschitz but satisfy a relaxed notion which we call "pseudo-Lipschitzness." Using this theorem, we show that a matrix concentration inequality known as the Weight Distribution Condition (WDC), which was previously only known to hold for Gaussian matrices with logarithmic aspect ratio, in fact holds for constant aspect ratios too. Since the WDC is a fundamental matrix concentration inequality at the heart of all existing theoretical guarantees on this problem, our tighter bound immediately yields improvements in all known results in the literature on compressed sensing with deep generative priors, including one-bit recovery, phase retrieval, low-rank matrix recovery, and more.
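As a concrete illustration of the setting in item 3, here is a minimal sketch of recovery with a generative prior: plain gradient descent on the latent code z for the loss 0.5 * ||A G(z) - y||^2, with a two-layer random-Gaussian ReLU generator of constant expansivity. The dimensions, step size, and optimizer are illustrative assumptions; the paper's guarantees apply to a more carefully designed descent scheme, and plain gradient descent may need random restarts.

```python
import numpy as np

def generator(z, weights):
    """Random-weight ReLU generator G(z) = W2 relu(W1 z) (two layers assumed)."""
    W1, W2 = weights
    return W2 @ np.maximum(W1 @ z, 0.0)

def recover(A, y, weights, steps=3000, lr=1e-2, rng=None):
    """Gradient descent on z for the loss 0.5 * ||A G(z) - y||^2."""
    rng = np.random.default_rng(rng)
    W1, W2 = weights
    z = rng.standard_normal(W1.shape[1])
    for _ in range(steps):
        pre = W1 @ z
        residual = A @ (W2 @ np.maximum(pre, 0.0)) - y
        # Backprop through G: dL/dz = W1^T [ (W2^T A^T residual) * 1{pre > 0} ]
        grad = W1.T @ ((W2.T @ (A.T @ residual)) * (pre > 0))
        z -= lr * grad
    return z, generator(z, weights)

# Illustrative setup: constant expansivity (each layer a constant factor wider), Gaussian weights
rng = np.random.default_rng(0)
k, n1, n, m = 10, 20, 40, 25                        # latent, hidden, signal, measurement dims
weights = [rng.standard_normal((n1, k)) / np.sqrt(k), rng.standard_normal((n, n1)) / np.sqrt(n1)]
x_true = generator(rng.standard_normal(k), weights)
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true + 0.01 * rng.standard_normal(m)      # a few noisy measurements
z_hat, x_hat = recover(A, y, weights, rng=1)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))  # relative recovery error
```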
  4. We investigate the robustness of stochastic approximation approaches against data poisoning attacks. We focus on two-layer neural networks with ReLU activation and show that under a specific notion of separability in the RKHS induced by the infinite-width network, training (finite-width) networks with stochastic gradient descent is robust against data poisoning attacks. Interestingly, we find that in addition to a lower bound on the width of the network, which is standard in the literature, we also require a distribution-dependent upper bound on the width for robust generalization. We provide extensive empirical evaluations that support and validate our theoretical results. 
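A small sketch of the training setup in item 4: a two-layer ReLU network with a fixed random output layer, trained by SGD on logistic loss, with an optional fraction of label-flipped training points as a simple stand-in for data poisoning. The loss, poisoning model, and hyperparameters are assumptions for illustration, not the paper's exact threat model.

```python
import numpy as np

def two_layer_relu(x, W, a):
    """f(x) = (1/sqrt(m)) * sum_j a_j relu(w_j . x); only W is trained, a is fixed to +/-1."""
    return np.maximum(W @ x, 0.0) @ a / np.sqrt(W.shape[0])

def sgd_train(X, y, width=256, lr=0.1, epochs=20, poison_frac=0.0, rng=None):
    """SGD on logistic loss; optionally flips a fraction of training labels (poisoning)."""
    rng = np.random.default_rng(rng)
    n, d = X.shape
    y = y.copy()
    flip = rng.random(n) < poison_frac              # simple label-flip poisoning model
    y[flip] = -y[flip]
    W = rng.standard_normal((width, d)) / np.sqrt(d)
    a = rng.choice([-1.0, 1.0], size=width)         # fixed random output layer
    for _ in range(epochs):
        for i in rng.permutation(n):
            pre = W @ X[i]
            out = np.maximum(pre, 0.0) @ a / np.sqrt(width)
            coeff = -y[i] / (1.0 + np.exp(y[i] * out))   # d/d(out) of log(1 + exp(-y * out))
            W -= lr * coeff * np.outer(a * (pre > 0), X[i]) / np.sqrt(width)
    return W, a
```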
  5. Motivated by applications in low-delay communication networks, finite-length analysis, or channel dispersion identification, of multi-user channels is very important. Recent studies also incorporate the effects of feedback in point-to-point and common-message broadcast channels (BCs). However, with private messages and feedback, finite-length results for BCs are much scarcer. Though it is known that feedback can strictly enlarge the capacity, the ultimate feedback capacity regions remain unknown even for some classical channels, including Gaussian BCs. In this work, we study the two-user broadcast packet erasure channel (PEC) with causal feedback, for which one of the cleanest feedback capacity results is known: the capacity region can be achieved by elegant linear network coding (LNC). We first derive a new finite-length outer bound for any LNC and then an accompanying inner bound by analyzing a three-phase LNC. For the outer bound, we adopt a linear-space-based framework, which can successfully find the LNC capacity. However, naively applying this method in the finite-length regime results in a loose outer bound, so we propose a new bounding technique based on carefully labelling each time slot according to the type of LNC transmitted. Simulation results show that the sum-rate gap between our inner and outer bounds is within 0.02 bits per channel use. Asymptotic analysis also shows that our bounds bracket the channel dispersion of the LNC feedback capacity for the broadcast PEC to within a factor of Q^{-1}(ε/2)/Q^{-1}(ε).
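To illustrate the coding scheme referenced in item 5, here is a toy simulation of a three-phase LNC for the two-user broadcast packet erasure channel with causal feedback: each user's packets are first sent uncoded, and packets erased at their intended receiver but overheard by the other receiver are later repaired with XOR combinations that both users can decode from side information. The bookkeeping and parameters are illustrative and do not reproduce the paper's exact scheme or its finite-length bounds.

```python
import numpy as np

def broadcast(rng, e1, e2):
    """One use of the broadcast PEC: returns (received_by_user1, received_by_user2)."""
    return rng.random() > e1, rng.random() > e2

def three_phase_lnc(n1, n2, e1, e2, rng=None):
    """Deliver n1 packets to user 1 and n2 to user 2 over a two-user broadcast PEC with
    causal feedback; returns the number of channel uses consumed by a three-phase LNC."""
    rng = np.random.default_rng(rng)
    slots = 0
    xor_queue = [0, 0]                      # packets overheard only by the unintended user

    # Phases 1 and 2: send each user's packets uncoded until some user receives them.
    for n_pkts, own in ((n1, 0), (n2, 1)):
        for _ in range(n_pkts):
            while True:
                slots += 1
                rcv = broadcast(rng, e1, e2)
                if rcv[own]:                # delivered to the intended user
                    break
                if rcv[1 - own]:            # overheard by the other user: repair later via XOR
                    xor_queue[own] += 1
                    break

    # Phase 3: send XORs of queued packets; each user decodes from its overheard side information.
    while xor_queue[0] > 0 or xor_queue[1] > 0:
        slots += 1
        rcv = broadcast(rng, e1, e2)
        for u in (0, 1):
            if xor_queue[u] > 0 and rcv[u]:
                xor_queue[u] -= 1
    return slots

# Illustrative run: throughput of the scheme for one trajectory
used = three_phase_lnc(n1=500, n2=500, e1=0.2, e2=0.3, rng=0)
print((500 + 500) / used, "packets per channel use")
```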