

Title: Efficient Backward Reachability Using the Minkowski Difference of Constrained Zonotopes
Backward reachability analysis is essential to synthesizing controllers that ensure the correctness of closed-loop systems. This paper is concerned with developing scalable algorithms that under-approximate the backward reachable sets for discrete-time uncertain linear and nonlinear systems. Our algorithm sequentially linearizes the dynamics and uses constrained zonotopes for set representation and computation. The main technical ingredient of our algorithm is an efficient way to under-approximate the Minkowski difference between a constrained zonotopic minuend and a zonotopic subtrahend, which consists of all possible values of the uncertainties and the linearization error. This Minkowski difference needs to be represented as a constrained zonotope to enable subsequent computation, but, as we show, it is impossible to find a polynomial-size representation for it in polynomial time. Our algorithm instead finds a polynomial-size under-approximation in polynomial time. We further analyze the conservatism of this under-approximation technique and show that it is exact under some conditions. Based on the developed Minkowski difference technique, we detail two backward reachable set computation algorithms to control the linearization error and incorporate nonconvex state constraints. Several examples illustrate the effectiveness of our algorithms.
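For context on the set representation named in the abstract: a constrained zonotope is a set {c + Gξ : ‖ξ‖∞ ≤ 1, Aξ = b}; it is closed under affine maps, and point membership reduces to a linear feasibility problem. The sketch below is only an illustrative toy with numpy/scipy; the ConstrainedZonotope class and its methods are hypothetical names, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linprog

class ConstrainedZonotope:
    """CZ = {c + G @ xi : ||xi||_inf <= 1, A @ xi = b}. Hypothetical helper class
    for illustration only, not the representation used in the paper's code."""
    def __init__(self, c, G, A=None, b=None):
        self.c = np.asarray(c, float)
        self.G = np.asarray(G, float)
        ng = self.G.shape[1]
        self.A = np.zeros((0, ng)) if A is None else np.asarray(A, float)
        self.b = np.zeros(0) if b is None else np.asarray(b, float)

    def affine_map(self, R):
        # R * CZ is again a constrained zonotope with the same constraints on xi.
        return ConstrainedZonotope(np.asarray(R) @ self.c, np.asarray(R) @ self.G,
                                   self.A, self.b)

    def contains(self, x):
        # Membership is a linear feasibility problem:
        # find xi with A xi = b, c + G xi = x, ||xi||_inf <= 1.
        ng = self.G.shape[1]
        A_eq = np.vstack([self.A, self.G])
        b_eq = np.concatenate([self.b, np.asarray(x, float) - self.c])
        res = linprog(np.zeros(ng), A_eq=A_eq, b_eq=b_eq,
                      bounds=[(-1.0, 1.0)] * ng, method="highs")
        return res.status == 0

# A shifted unit box whose two generator coefficients are coupled by A xi = b.
cz = ConstrainedZonotope(c=[1.0, 0.0], G=[[1.0, 0.0], [0.0, 1.0]],
                         A=[[1.0, 1.0]], b=[0.5])
print(cz.contains([1.5, 0.0]))   # True: xi = (0.5, 0.0) is feasible
print(cz.contains([2.0, 1.0]))   # False: xi = (1, 1) violates A xi = b
```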
Award ID(s):
1918123
NSF-PAR ID:
10366279
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
ISSN:
1937-4151
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Zonotopes are widely used for over-approximating forward reachable sets of uncertain linear systems for verification purposes. In this paper, we use zonotopes to achieve more scalable algorithms that under-approximate backward reachable sets of uncertain linear systems for control design. The main difference is that the backward reachability analysis is a two-player game and involves Minkowski difference operations, but zonotopes are not closed under such operations. We under-approximate this Minkowski difference with a zonotope, which can be obtained by solving a linear optimization problem. We further develop an efficient zonotope order reduction technique to bound the complexity of the obtained zonotopic under-approximations. The proposed approach is evaluated against existing approaches using randomly generated instances and illustrated with several examples.
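The paper in item 1 obtains its zonotopic under-approximation of the Minkowski difference by solving a linear program; that LP is not reproduced here. As a minimal illustration of the set operation itself, the sketch below handles the special case where both operands are axis-aligned boxes (the simplest zonotopes), for which the Minkowski difference is exact and closed-form; the function name and interface are assumptions of this sketch.

```python
import numpy as np

def box_minkowski_difference(c_x, r_x, c_y, r_y):
    """Exact Minkowski difference X - Y = {z : z + Y is a subset of X} for
    axis-aligned boxes X = {x : |x - c_x| <= r_x}, Y = {y : |y - c_y| <= r_y}
    (componentwise). Boxes are the simplest zonotopes; this helper is ours.
    Returns (center, half-widths) of X - Y, or None if it is empty."""
    c_x, r_x = np.asarray(c_x, float), np.asarray(r_x, float)
    c_y, r_y = np.asarray(c_y, float), np.asarray(r_y, float)
    r = r_x - r_y                  # shrink each half-width by the subtrahend's
    if np.any(r < 0):
        return None                # Y is wider than X in some direction: empty
    return c_x - c_y, r

# Every z in the result satisfies z + Y contained in X.
center, radius = box_minkowski_difference([0, 0], [2, 2], [0.5, 0], [0.5, 1])
print(center, radius)              # center [-0.5, 0.0], half-widths [1.5, 1.0]
```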
  2. INTRODUCTION: Solving quantum many-body problems, such as finding ground states of quantum systems, has far-reaching consequences for physics, materials science, and chemistry. Classical computers have facilitated many profound advances in science and technology, but they often struggle to solve such problems. Scalable, fault-tolerant quantum computers will be able to solve a broad array of quantum problems but are unlikely to be available for years to come. Meanwhile, how can we best exploit our powerful classical computers to advance our understanding of complex quantum systems? Recently, classical machine learning (ML) techniques have been adapted to investigate problems in quantum many-body physics. So far, these approaches are mostly heuristic, reflecting the general paucity of rigorous theory in ML. Although they have been shown to be effective in some intermediate-size experiments, these methods are generally not backed by convincing theoretical arguments to ensure good performance.
     RATIONALE: A central question is whether classical ML algorithms can provably outperform non-ML algorithms in challenging quantum many-body problems. We provide a concrete answer by devising and analyzing classical ML algorithms for predicting the properties of ground states of quantum systems. We prove that these ML algorithms can efficiently and accurately predict ground-state properties of gapped local Hamiltonians, after learning from data obtained by measuring other ground states in the same quantum phase of matter. Furthermore, under a widely accepted complexity-theoretic conjecture, we prove that no efficient classical algorithm that does not learn from data can achieve the same prediction guarantee. By generalizing from experimental data, ML algorithms can solve quantum many-body problems that could not be solved efficiently without access to experimental data.
     RESULTS: We consider a family of gapped local quantum Hamiltonians, where the Hamiltonian H(x) depends smoothly on m parameters (denoted by x). The ML algorithm learns from a set of training data consisting of sampled values of x, each accompanied by a classical representation of the ground state of H(x). These training data could be obtained from either classical simulations or quantum experiments. During the prediction phase, the ML algorithm predicts a classical representation of ground states for Hamiltonians different from those in the training data; ground-state properties can then be estimated using the predicted classical representation. Specifically, our classical ML algorithm predicts expectation values of products of local observables in the ground state, with a small error when averaged over the value of x. The run time of the algorithm and the amount of training data required both scale polynomially in m and linearly in the size of the quantum system. Our proof of this result builds on recent developments in quantum information theory, computational learning theory, and condensed matter theory. Furthermore, under the widely accepted conjecture that nondeterministic polynomial-time (NP)-complete problems cannot be solved in randomized polynomial time, we prove that no polynomial-time classical algorithm that does not learn from data can match the prediction performance achieved by the ML algorithm. In a related contribution using similar proof techniques, we show that classical ML algorithms can efficiently learn how to classify quantum phases of matter. In this scenario, the training data consist of classical representations of quantum states, where each state carries a label indicating whether it belongs to phase A or phase B. The ML algorithm then predicts the phase label for quantum states that were not encountered during training. The classical ML algorithm not only classifies phases accurately, but also constructs an explicit classifying function. Numerical experiments verify that our proposed ML algorithms work well in a variety of scenarios, including Rydberg atom systems, two-dimensional random Heisenberg models, symmetry-protected topological phases, and topologically ordered phases.
     CONCLUSION: We have rigorously established that classical ML algorithms, informed by data collected in physical experiments, can effectively address some quantum many-body problems. These rigorous results boost our hopes that classical ML trained on experimental data can solve practical problems in chemistry and materials science that would be too hard to solve using classical processing alone. Our arguments build on the concept of a succinct classical representation of quantum states derived from randomized Pauli measurements. Although some quantum devices lack the local control needed to perform such measurements, we expect that other classical representations could be exploited by classical ML with similarly powerful results. How can we make use of accessible measurement data to predict properties reliably? Answering such questions will expand the reach of near-term quantum platforms.
     Figure: Classical algorithms for quantum many-body problems. Classical ML algorithms learn from training data, obtained from either classical simulations or quantum experiments. Then, the ML algorithm produces a classical representation for the ground state of a physical system that was not encountered during training. Classical algorithms that do not learn from data may require substantially longer computation time to achieve the same task.
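To make the train-then-predict workflow in item 2 concrete, the toy below fits a generic kernel ridge regression from sampled parameter values x to a synthetic, smoothly varying stand-in for a ground-state property. It is only a schematic illustration on invented data, not the paper's classical-shadow-based algorithm, and it carries none of that algorithm's guarantees.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 3  # number of Hamiltonian parameters (toy value)

def ground_state_property(x):
    # Synthetic stand-in for a property that varies smoothly with the parameters x.
    return 0.5 * np.sin(x @ np.arange(1, m + 1))

# "Training data": sampled parameter values paired with (noisy) measured properties.
X_train = rng.uniform(-1, 1, size=(200, m))
y_train = ground_state_property(X_train) + 0.01 * rng.normal(size=200)

def rbf_kernel(A, B, gamma=2.0):
    # Gaussian (RBF) kernel matrix between two sets of parameter vectors.
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

# Kernel ridge regression: one generic ML model, chosen here only for brevity.
K = rbf_kernel(X_train, X_train)
weights = np.linalg.solve(K + 1e-3 * np.eye(len(K)), y_train)

# Predict the property at parameter values not seen during training.
X_test = rng.uniform(-1, 1, size=(5, m))
y_pred = rbf_kernel(X_test, X_train) @ weights
print(np.round(y_pred, 3))
print(np.round(ground_state_property(X_test), 3))
```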
  3. We study equilibrium computation with extensive-form correlation in two-player turn-taking stochastic games. Our main results are two-fold: (1) We give an algorithm for computing a Stackelberg extensive-form correlated equilibrium (SEFCE), which runs in time polynomial in the size of the game, as well as the number of bits required to encode each input number. (2) We give an efficient algorithm for approximately computing an optimal extensive-form correlated equilibrium (EFCE) up to machine precision, i.e., the algorithm achieves approximation error ε in time polynomial in the size of the game, as well as log(1/ε). Our algorithm for SEFCE is the first polynomial-time algorithm for equilibrium computation with commitment in such a general class of stochastic games. Existing algorithms for SEFCE typically make stronger assumptions such as no chance moves, and are designed for extensive-form games in the less succinct tree form. Our algorithm for approximately optimal EFCE is, to our knowledge, the first algorithm that achieves 3 desiderata simultaneously: approximate optimality, polylogarithmic dependency on the approximation error, and compatibility with stochastic games in the more succinct graph form. Existing algorithms achieve at most 2 of these desiderata, often also relying on additional technical assumptions.
  4. Hazay, Carmit; Stam, Martijn (Eds.)
    We study the computational problem of finding a shortest non-zero vector in a rotation of ℤ^n, which we call ℤSVP. It has been a long-standing open problem to determine if a polynomial-time algorithm for ℤSVP exists, and there is by now a beautiful line of work showing how to solve it efficiently in certain very special cases. However, despite all of this work, the fastest known algorithm that is proven to solve ℤSVP is still simply the fastest known algorithm for solving SVP (i.e., the problem of finding shortest non-zero vectors in arbitrary lattices), which runs in 2^{n+o(n)} time. We therefore set aside the (perhaps impossible) goal of finding an efficient algorithm for ℤSVP and instead ask what else we can say about the problem. E.g., can we find any non-trivial speedup over the best known SVP algorithm? And, if ℤSVP actually is hard, then what consequences would follow? Our results are as follows. We show that ℤSVP is in a certain sense strictly easier than SVP on arbitrary lattices. In particular, we show how to reduce ℤSVP to an approximate version of SVP in the same dimension (in fact, even to approximate unique SVP, for any constant approximation factor). Such a reduction seems very unlikely to work for SVP itself, so we view this as a qualitative separation of ℤSVP from SVP. As a consequence of this reduction, we obtain a 2^{n/2+o(n)}-time algorithm for ℤSVP, i.e., the first non-trivial speedup over the best known algorithm for SVP on general lattices. (In fact, this reduction works for a more general class of lattices: semi-stable lattices with not-too-large λ_1.) We show a simple public-key encryption scheme that is secure if (an appropriate variant of) ℤSVP is actually hard. Specifically, our scheme is secure if it is difficult to distinguish (in the worst case) a rotation of ℤ^n from either a lattice with all non-zero vectors longer than √(n / log n) or a lattice with smoothing parameter significantly smaller than the smoothing parameter of ℤ^n. The latter result has an interesting qualitative connection with reverse Minkowski theorems, which in some sense say that "ℤ^n has the largest smoothing parameter." We show a distribution of bases B for rotations of ℤ^n such that, if ℤSVP is hard for any input basis, then ℤSVP is hard on input B. This gives a satisfying theoretical resolution to the problem of sampling hard bases for ℤ^n, which was studied by Blanks and Miller [9]. This worst-case to average-case reduction is also crucially used in the analysis of our encryption scheme. (In recent independent work that appeared as a preprint before this work, Ducas and van Woerden showed essentially the same thing for general lattices [15], and they also used this to analyze the security of a public-key encryption scheme. Similar ideas also appeared in [5, 11, 20] in different contexts.) We perform experiments to determine how practical basis reduction performs on bases of ℤ^n that are generated in different ways and how heuristic sieving algorithms perform on ℤ^n. Our basis reduction experiments complement and add to those performed by Blanks and Miller, as we work with a larger class of algorithms (i.e., larger block sizes) and study the "provably hard" distribution of bases described above. Our sieving experiments confirm that heuristic sieving algorithms perform as expected on ℤ^n.
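One way to get a feel for the basis-sampling question in item 4 is to scramble the identity basis of ℤ^n with random integer shear operations: the result is still a basis of ℤ^n, but the standard unit vectors (its shortest non-zero vectors) are no longer visible as rows. The sketch below does exactly that; it is a toy illustration, not the provably hard basis distribution or the reduction experiments from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def scrambled_basis(n, steps=10):
    """A basis of Z^n obtained from the identity by random integer shear
    operations (each shear has determinant 1), so the lattice is unchanged
    but the standard unit vectors are no longer rows of the basis."""
    B = np.eye(n, dtype=np.int64)
    for _ in range(steps):
        i, j = rng.choice(n, size=2, replace=False)
        B[i] += rng.integers(-2, 3) * B[j]
    return B

n = 6
B = scrambled_basis(n)
print(B)
print(round(np.linalg.det(B)))          # 1: the rows still generate Z^n
# The shortest non-zero vectors of Z^n are the unit vectors e_1, ..., e_n.
# Their coefficients in the scrambled basis are integers (solve e_1 = coeffs @ B):
coeffs = np.linalg.solve(B.T.astype(float), np.eye(n)[0])
print(np.round(coeffs, 6))
```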
  5. Real-time decision making in IoT applications relies upon space-efficient evaluation of queries over streaming data. To model the uncertainty in the classification of data being processed, we consider the model of probabilistic strings --- sequences of discrete probability distributions over a finite set of events, and initiate the study of space complexity of streaming computation for different classes of queries over such probabilistic strings. We first consider the problem of computing the probability that a word, sampled from the distribution defined by the probabilistic string read so far, is accepted by a given deterministic finite automaton. We show that this regular pattern matching problem can be solved using space that is only poly-logarithmic in the string length (and polynomial in the size of the DFA) if we are allowed a multiplicative approximation error. Then we show how to generalize this result to quantitative queries specified by additive cost register automata --- these are automata that map strings to numerical values using finite control and registers that get updated using linear transformations. Finally, we consider the case when updates in such an automaton involve tests, and in particular, when there is a counter variable that can be either incremented or decremented but decrements only apply when the counter value is non-zero. In this case, the desired answer depends on the probability distribution over the set of possible counter values that can range from 0 to n for a string of length n. Under a mild assumption, namely probabilities of the individual events are bounded away from 0 and 1, we show that there is an algorithm that can compute all n entries of this probability distribution vector to within additive 1/poly(n) error using space that is only Õ(n). In establishing these results, we introduce several new technical ideas that may prove useful for designing space-efficient algorithms for other query models over probabilistic strings.
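The acceptance-probability query in item 5 has a simple exact recursion: maintain a distribution over DFA states and push it through each symbol distribution of the probabilistic string. The sketch below implements that recursion (all names are ours); it is not the paper's space-efficient scheme, which additionally controls the bit precision of the stored probabilities to reach poly-logarithmic space under a multiplicative approximation.

```python
from collections import defaultdict

def acceptance_probability(delta, start, accepting, prob_string):
    """Probability that a word sampled from a probabilistic string (a sequence
    of independent symbol distributions) is accepted by a DFA.
    delta: dict mapping (state, symbol) -> next state
    prob_string: iterable of dicts mapping symbol -> probability (each sums to 1)
    Space is O(#states) numbers, but their bit precision grows with the string
    length, which is the issue the paper's approximation scheme addresses."""
    dist = {start: 1.0}                       # current distribution over DFA states
    for symbol_dist in prob_string:
        nxt = defaultdict(float)
        for state, p_state in dist.items():
            for symbol, p_symbol in symbol_dist.items():
                nxt[delta[(state, symbol)]] += p_state * p_symbol
        dist = dict(nxt)
    return sum(p for state, p in dist.items() if state in accepting)

# Example DFA over {a, b} accepting words with an even number of b's.
delta = {(0, "a"): 0, (0, "b"): 1, (1, "a"): 1, (1, "b"): 0}
stream = [{"a": 0.5, "b": 0.5}, {"a": 0.9, "b": 0.1}, {"a": 0.2, "b": 0.8}]
print(acceptance_probability(delta, start=0, accepting={0}, prob_string=stream))  # 0.5
```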