Peak estimation bounds extreme values of a function of state along trajectories of a dynamical system. This paper focuses on extending peak estimation to continuous and discrete settings with time-independent and time-dependent uncertainty. Techniques from optimal control are used to incorporate uncertainty into an existing occupation measure-based peak estimation framework, which includes special consideration for handling switching-type (polytopic) uncertainties. The resulting infinite-dimensional linear programs can be solved approximately with Linear Matrix Inequalities arising from the moment-SOS hierarchy.
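The peak-estimation problem above can be illustrated numerically. The sketch below is not the occupation-measure/moment-SOS method itself: it Monte-Carlo samples trajectories of a toy uncertain system x' = -x + w(t) (the dynamics, the cost p(x) = x^2, and the noise bound are hypothetical choices), which yields only a lower bound on the true peak, whereas the LP relaxations in the paper certify upper bounds.

```python
import random

def simulate_peak(x0, w_bound, dt=0.01, t_final=5.0):
    """Euler-integrate x' = -x + w(t) with |w(t)| <= w_bound and
    return the peak of p(x) = x^2 along the sampled trajectory."""
    x = x0
    peak = x * x
    for _ in range(int(t_final / dt)):
        w = random.uniform(-w_bound, w_bound)  # time-dependent uncertainty
        x = x + dt * (-x + w)
        peak = max(peak, x * x)
    return peak

random.seed(0)
# Sampling gives a lower bound on the true peak; the occupation-measure
# LPs described above instead produce certified upper bounds.
lower_bound = max(simulate_peak(0.0, w_bound=0.5) for _ in range(200))
print(lower_bound)
```

Since |x| stays below the noise bound 0.5 here, the sampled peak of x^2 cannot exceed 0.25; a moment-SOS upper bound would sandwich the true value from the other side.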
Concentration inequalities for sums of Markov-dependent random matrices
Abstract: We give Hoeffding- and Bernstein-type concentration inequalities for the largest eigenvalue of sums of random matrices arising from a Markov chain. We consider time-dependent matrix-valued functions on a general state space, generalizing previous results that had only considered Hoeffding-type inequalities, and only for time-independent functions on a finite state space. In particular, we study a kind of non-commutative moment generating function, provide tight bounds on this object, and use a method of Garg et al. (A matrix expander Chernoff bound. In Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2018, pages 1102–1114, New York, NY, USA, 2018. Association for Computing Machinery) to turn these into tail bounds. Our proof proceeds spectrally, bounding the norm of a certain perturbed operator. In the process we make an interesting connection to dynamical systems and Banach space theory to prove a crucial result on the limiting behaviour of our moment generating function that may be of independent interest.
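The quantity the paper controls can be demonstrated empirically. The sketch below (the two-state chain, the transition matrix, and the matrix-valued function f are hypothetical choices, not from the paper) simulates a Markov chain, accumulates a sum of bounded time-dependent symmetric matrices along it, and computes the largest eigenvalue of the sum, which the paper's Hoeffding-type bound would control at roughly the sqrt(n log d) scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# A two-state Markov chain with transition matrix P (hypothetical example).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

def f(k, state):
    """Time-dependent, matrix-valued function of the chain state:
    symmetric matrices with operator norm at most 1."""
    sign = 1.0 if state == 0 else -1.0
    theta = 0.1 * k
    return sign * np.array([[np.cos(theta), 0.0],
                            [0.0, -np.cos(theta)]])

def markov_matrix_sum(n):
    """Run the chain n steps and return sum_k f(k, X_k)."""
    state = 0
    total = np.zeros((2, 2))
    for k in range(n):
        total += f(k, state)
        state = rng.choice(2, p=P[state])
    return total

n = 500
lam_max = np.linalg.eigvalsh(markov_matrix_sum(n))[-1]
print(lam_max, np.sqrt(n))  # the bound predicts lam_max ~ O(sqrt(n log d))
```

Since each summand has operator norm at most 1, the largest eigenvalue trivially lies in [0, n]; the point of the concentration inequality is that it is typically far smaller.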
- Award ID(s): 2208340
- PAR ID: 10663163
- Publisher / Repository: Journal of the IMA
- Date Published:
- Journal Name: Information and Inference: A Journal of the IMA
- Volume: 13
- Issue: 4
- ISSN: 2049-8772
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We prove Fuk-Nagaev and Rosenthal-type inequalities for the sums of independent random matrices, focusing on the situation when the norms of the matrices possess finite moments of only low orders. Our bounds depend on the “intrinsic” dimensional characteristics such as the effective rank, as opposed to the dimension of the ambient space. We illustrate the advantages of such results in several applications, including new moment inequalities for the sample covariance operators of heavy-tailed distributions. Moreover, we demonstrate that our techniques yield sharpened versions of the moment inequalities for empirical processes.
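The "effective rank" mentioned above can be computed directly. The sketch below uses one common definition, tr(Σ)/‖Σ‖ (the trace over the operator norm); whether this matches the exact definition in the paper is an assumption.

```python
import numpy as np

def effective_rank(sigma):
    """Effective rank tr(Sigma) / ||Sigma||_op of a positive semidefinite
    matrix; it can be far smaller than the ambient dimension."""
    eigvals = np.linalg.eigvalsh(sigma)  # sorted ascending
    return eigvals.sum() / eigvals[-1]

# A covariance with geometrically decaying spectrum: the ambient
# dimension is 100, but the effective rank is about 2.
d = 100
spectrum = np.array([2.0 ** (-j) for j in range(d)])
sigma = np.diag(spectrum)
print(d, effective_rank(sigma))
```

Dimension-free bounds of the kind described above replace the ambient d (here 100) by this much smaller quantity (here about 2).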
- We consider the problem of estimating the mean of a sequence of random elements f(θ, X_1), ..., f(θ, X_n), where f is a fixed scalar function, S = (X_1, ..., X_n) are independent random variables, and θ is a possibly S-dependent parameter. An example of such a problem would be to estimate the generalization error of a neural network trained on n examples, where f is a loss function. Classically, this problem is approached through concentration inequalities holding uniformly over compact parameter sets of functions f, for example as in Rademacher or VC-type analysis. However, in many problems such inequalities often yield numerically vacuous estimates. Recently, the PAC-Bayes framework has been proposed as a better alternative for this class of problems, for its ability to often give numerically non-vacuous bounds. In this paper, we show that we can do even better: we show how to refine the proof strategy of the PAC-Bayes bounds and achieve even tighter guarantees. Our approach is based on the coin-betting framework that derives the numerically tightest known time-uniform concentration inequalities from the regret guarantees of online gambling algorithms. In particular, we derive the first PAC-Bayes concentration inequality based on the coin-betting approach that holds simultaneously for all sample sizes. We demonstrate its tightness by showing that relaxing it recovers a number of previous results in closed form, including the Bernoulli-KL and empirical Bernstein inequalities. Finally, we propose an efficient algorithm to numerically calculate confidence sequences from our bound, which often generates non-vacuous confidence bounds even with one sample, unlike the state-of-the-art PAC-Bayes bounds.
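The empirical Bernstein inequality that the abstract recovers as a relaxation is easy to compute. The sketch below implements the classical Maurer-Pontil form for i.i.d. samples in [0, 1] (this is a standard bound, not the paper's tighter coin-betting bound or its algorithm).

```python
import math
import numpy as np

def empirical_bernstein_upper(x, delta):
    """Maurer-Pontil empirical Bernstein upper confidence bound for the
    mean of i.i.d. samples in [0, 1]; holds with probability >= 1 - delta."""
    n = len(x)
    mean = np.mean(x)
    var = np.var(x, ddof=1)  # sample variance
    log_term = math.log(2.0 / delta)
    return (mean
            + math.sqrt(2.0 * var * log_term / n)
            + 7.0 * log_term / (3.0 * (n - 1)))

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, size=2000)
ucb = empirical_bernstein_upper(x, delta=0.05)
print(np.mean(x), ucb)
```

Unlike this fixed-n bound, the coin-betting confidence sequences described above hold simultaneously for all sample sizes.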
- Cumulative memory, the sum of space used per step over the duration of a computation, is a fine-grained measure of time-space complexity that was introduced to analyze cryptographic applications like password hashing. It is a more accurate cost measure for algorithms that have infrequent spikes in memory usage and are run in environments such as cloud computing that allow dynamic allocation and de-allocation of resources during execution, or when many instances of an algorithm are interleaved in parallel. We prove the first lower bounds on cumulative memory complexity for both sequential classical computation and quantum circuits. Moreover, we develop general paradigms for bounding cumulative memory complexity inspired by the standard paradigms for proving time-space tradeoff lower bounds, which can only lower bound the maximum space used during an execution. The resulting lower bounds on cumulative memory that we obtain are just as strong as the best time-space tradeoff lower bounds, which are very often known to be tight. Although previous results for pebbling and random oracle models have yielded time-space tradeoff lower bounds larger than the cumulative memory complexity, our results show that in general computational models such separations cannot follow from known lower bound techniques and are not true for many functions. Among many possible applications of our general methods, we show that any classical sorting algorithm with success probability at least \(1/\mathrm{poly}(n)\) requires cumulative memory \(\tilde{\Omega}(n^2)\), any classical matrix multiplication algorithm requires cumulative memory \(\Omega(n^6/T)\), any quantum sorting circuit requires cumulative memory \(\Omega(n^3/T)\), and any quantum circuit that finds \(k\) disjoint collisions in a random function requires cumulative memory \(\Omega(k^3 n/T^2)\).
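The distinction the abstract draws between cumulative memory and the classical peak-space-times-time product can be made concrete. The toy trace below (a hypothetical algorithm with one brief memory spike, not an example from the paper) shows how much the two measures can diverge.

```python
def cumulative_memory(space_per_step):
    """Cumulative memory: space used at each step, summed over all steps."""
    return sum(space_per_step)

# A computation of T steps using 10 units of space, except for a single
# expensive step that spikes to 10,000 units.
T = 1000
trace = [10] * T
trace[500] = 10_000

cm = cumulative_memory(trace)       # 999 * 10 + 10_000 = 19_990
peak_time_product = max(trace) * T  # 10_000 * 1000 = 10_000_000
print(cm, peak_time_product)
```

Classical time-space tradeoffs lower bound only the peak-times-time product, which here overcharges the computation by a factor of about 500; cumulative memory is the measure that correctly prices the spike.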
- We prove two compactness results for function spaces with finite Dirichlet energy of half-space nonlocal gradients. In each of these results, we provide sufficient conditions on a sequence of kernel functions that guarantee the asymptotic compact embedding of the associated nonlocal function spaces into the class of square-integrable functions. Moreover, we demonstrate that the sequence of nonlocal function spaces converges in an appropriate sense to a limiting function space. As an application, we prove uniform Poincaré-type inequalities for a sequence of half-space gradient operators. We also apply the compactness result to demonstrate the convergence of appropriately parameterized nonlocal heterogeneous anisotropic diffusion problems, and we construct asymptotically compatible schemes for these types of problems. Another application concerns the convergence and robust discretization of a nonlocal optimal control problem.
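The uniform Poincaré-type inequalities mentioned above take, schematically, the following shape (the operator symbol \(\mathcal{G}_{\rho_k}\) for the nonlocal gradient with kernel \(\rho_k\) and the constant \(C\) are assumed notation, not necessarily the paper's):

```latex
% Schematic uniform Poincaré-type inequality: the key point is that the
% constant C is independent of the kernel index k (assumed notation).
\[
  \| u \|_{L^2(\Omega)} \;\le\; C \, \| \mathcal{G}_{\rho_k} u \|_{L^2(\Omega)}
  \qquad \text{for all admissible } u \text{ and all } k .
\]
```

Uniformity in \(k\) is what allows such inequalities to pass to the limiting function space.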