

Award ID contains: 1553873


  1. We present a method for contraction-based feedback motion planning of locally incrementally exponentially stabilizable systems with unknown dynamics that provides probabilistic safety and reachability guarantees. Given a dynamics dataset, our method learns a deep control-affine approximation of the dynamics. To find a trusted domain where this model can be used for planning, we obtain an estimate of the Lipschitz constant of the model error, which is valid with a given probability, in a region around the training data, providing a local, spatially-varying model error bound. We derive a trajectory tracking error bound for a contraction-based controller that is subjected to this model error, and then learn a controller that optimizes this tracking bound. With a given probability, we verify the correctness of the controller and tracking error bound in the trusted domain. We then use the trajectory error bound together with the trusted domain to guide a sampling-based planner to return trajectories that can be robustly tracked in execution. We show results on a 4D car, a 6D quadrotor, and a 22D deformable object manipulation task, showing our method plans safely with learned models of high-dimensional underactuated systems, while baselines that plan without considering the tracking error bound or the trusted domain can fail to stabilize the system and become unsafe.
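
The abstract above combines a probabilistic bound on the Lipschitz constant of the learned-model error near the training data with a tracking error bound that the planner must respect. The sketch below is a minimal, hypothetical illustration of those two ideas, not the paper's implementation: it estimates a crude error Lipschitz constant from finite differences over validation states, and rejects planned trajectories whose tracking tube (of an assumed radius) intersects spherical obstacles. All function and variable names are placeholders.

```python
import numpy as np

def estimate_error_lipschitz(states, f_observed, f_learned):
    """Largest finite-difference slope of the model error e(x) = f_observed(x) - f_learned(x)
    over all pairs of validation states; a crude stand-in for the probabilistic
    Lipschitz bound described in the abstract."""
    errors = np.array([f_observed(x) - f_learned(x) for x in states])
    best = 0.0
    for i in range(len(states)):
        for j in range(i + 1, len(states)):
            dx = np.linalg.norm(states[i] - states[j])
            if dx > 1e-9:
                best = max(best, np.linalg.norm(errors[i] - errors[j]) / dx)
    return best

def trajectory_is_safe(trajectory, obstacles, tracking_bound):
    """Reject a planned trajectory if any waypoint's tracking tube (a ball of radius
    tracking_bound) intersects an obstacle given as (center, radius)."""
    for x in trajectory:
        for center, radius in obstacles:
            if np.linalg.norm(x - center) <= radius + tracking_bound:
                return False
    return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f_observed = lambda x: np.sin(x)          # stand-in for dynamics observed in data
    f_learned = lambda x: x - x**3 / 6.0      # stand-in learned approximation
    states = [rng.uniform(-1.0, 1.0, size=2) for _ in range(30)]
    L = estimate_error_lipschitz(states, f_observed, f_learned)
    plan = [np.array([0.1 * k, 0.05 * k]) for k in range(10)]
    obstacles = [(np.array([0.5, 0.5]), 0.1)]
    print("error Lipschitz estimate:", L)
    print("plan safe under tracking bound 0.05:", trajectory_is_safe(plan, obstacles, 0.05))
```
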
  2. In this paper, we first propose a method that can efficiently compute the maximal robust controlled invariant set for discrete-time linear systems with pure delay in the input. The key to this method is to construct an auxiliary linear system (without delay) with the same state-space dimension as the original system under consideration and to relate the maximal invariant set of the auxiliary system to that of the original system. When the system is subject to disturbances, guaranteeing safety is harder for systems with input delays, and the ability to incorporate any additional information about the disturbance becomes more critical. Motivated by this observation, in the second part of the paper, we generalize the proposed method to take into account additional preview information on the disturbances, while maintaining computational efficiency. Compared with the naive approach of constructing a higher-dimensional system by appending the state-space with the delayed inputs and previewed disturbances, the proposed approach is demonstrated to scale much better with increasing delay time.
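
As a point of reference for the same-dimension auxiliary system mentioned above, the sketch below shows the classical predictor-style reduction for a linear system with input delay, x(t+1) = A x(t) + B u(t-d): the disturbance-free d-step-ahead prediction z(t) evolves as z(t+1) = A z(t) + B u(t) and has the same state dimension as x. This is a standard construction included only for illustration; the paper's auxiliary system and the invariant-set computation itself are not reproduced here, and all matrices and inputs below are made up.

```python
import numpy as np

def delay_free_state(A, B, x, past_inputs):
    """z(t) = A^d x(t) + sum_{k=0}^{d-1} A^{d-1-k} B u(t-d+k),
    where past_inputs = [u(t-d), ..., u(t-1)]."""
    d = len(past_inputs)
    z = np.linalg.matrix_power(A, d) @ x
    for k, u in enumerate(past_inputs):
        z = z + np.linalg.matrix_power(A, d - 1 - k) @ (B @ u)
    return z

if __name__ == "__main__":
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    x = np.array([1.0, 0.0])
    past = [np.array([0.5]), np.array([-0.2])]   # buffered inputs u(t-2), u(t-1); d = 2
    z = delay_free_state(A, B, x, past)

    # Verify the delay-free recursion z(t+1) = A z(t) + B u(t) for a new input u(t).
    u_now = np.array([0.3])
    x_next = A @ x + B @ past[0]                 # original dynamics consume u(t-d)
    z_next = delay_free_state(A, B, x_next, past[1:] + [u_now])
    print(np.allclose(z_next, A @ z + B @ u_now))   # True
```
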
  3. We present a method for learning to satisfy uncertain constraints from demonstrations. Our method uses robust optimization to obtain a belief over the potentially infinite set of possible constraints consistent with the demonstrations, and then uses this belief to plan trajectories that trade off performance with satisfying the possible constraints. We use these trajectories in a closed-loop policy that executes and replans using belief updates, which incorporate data gathered during execution. We derive guarantees on the accuracy of our constraint belief and probabilistic guarantees on plan safety. We present results on a 7-DOF arm and a 12D quadrotor, showing our method can learn to satisfy high-dimensional (up to 30D) uncertain constraints, and outperforms baselines in safety and efficiency.
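
The following toy sketch illustrates, in a much simplified form, the belief-over-constraints idea described above: the belief is a finite set of sampled candidate unsafe boxes, candidates inconsistent with the demonstrations are discarded, candidate plans are scored by how many surviving candidates they would violate, and states observed during execution prune the belief further. This is an assumed, simplified stand-in for the paper's robust-optimization formulation; the box parameterization and helper names are hypothetical.

```python
import numpy as np

def consistent(candidate_box, demo_states):
    """A candidate unsafe box is consistent if no demonstration state lies inside it."""
    lo, hi = candidate_box
    return not any(np.all((s >= lo) & (s <= hi)) for s in demo_states)

def violation_count(plan, belief):
    """Number of candidate constraints in the belief that the plan would violate."""
    return sum(any(np.all((x >= lo) & (x <= hi)) for x in plan) for lo, hi in belief)

rng = np.random.default_rng(1)
demos = [np.array([t, t]) * 0.1 for t in range(10)]                # demo states in 2D
candidates = [(c - 0.1, c + 0.1) for c in rng.uniform(0, 1, (50, 2))]
belief = [b for b in candidates if consistent(b, demos)]           # prior belief

plan_a = [np.array([0.1 * t, 0.05 * t]) for t in range(10)]
plan_b = [np.array([0.05 * t, 0.1 * t]) for t in range(10)]
best = min([plan_a, plan_b], key=lambda p: violation_count(p, belief))

# "Belief update": states visited safely during execution rule out more candidates.
observed = best[:5]
belief = [b for b in belief if consistent(b, observed)]
print(len(candidates), "candidates ->", len(belief), "still plausible")
```
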
  4. We consider abstraction-based design of output-feedback controllers for dynamical systems with a finite set of inputs and outputs against specifications in linear-time temporal logic. The usual procedure for abstraction-based controller design (ABCD) first constructs a finite-state abstraction of the underlying dynamical system, and second, uses reactive synthesis techniques to compute an abstract state-feedback controller on the abstraction. In this context, our contribution is two-fold: (I) we define a suitable relation between the original system and its abstraction which characterizes the soundness and completeness conditions for an abstract state-feedback controller to be refined to a concrete output-feedback controller for the original system, and (II) we provide an algorithm to compute a sound finite-state abstraction fulfilling this relation. Our relation generalizes feedback-refinement relations from ABCD with state-feedback. Our algorithm for constructing sound finite-state abstractions is inspired by the simultaneous reachability and bisimulation minimization algorithm of Lee and Yannakakis. We lift their idea to the computation of an observation-equivalent system and show how sound abstractions can be obtained by stopping this algorithm at any point. Additionally, our new algorithm produces a realization of the topological closure of the input/output behavior of the original system if it is finite-state realizable.
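
The sketch below illustrates, on a small finite transition system, the partition-refinement step underlying observation-equivalence computations of the kind referenced above (in the spirit of Lee and Yannakakis): start from the output-induced partition and repeatedly split blocks whose states reach different sets of blocks under some input. It is a toy illustration only; the paper's algorithm interleaves refinement with reachability and targets abstractions of dynamical systems, which is not reproduced here.

```python
def refine(states, inputs, trans, output):
    """trans[(state, input)] -> set of successor states; output[state] -> observation."""
    # Initial partition: states with the same output are grouped together.
    blocks = {}
    for s in states:
        blocks.setdefault(output[s], set()).add(s)
    partition = list(blocks.values())
    changed = True
    while changed:
        changed = False

        def signature(s):
            # Which blocks are reachable from s under each input (frozen for hashing).
            return tuple(
                frozenset(i for i, blk in enumerate(partition)
                          if trans.get((s, u), set()) & blk)
                for u in inputs)

        new_partition = []
        for blk in partition:
            groups = {}
            for s in blk:
                groups.setdefault(signature(s), set()).add(s)
            if len(groups) > 1:
                changed = True
            new_partition.extend(groups.values())
        partition = new_partition
    return partition

# Hypothetical four-state system with one input and binary outputs.
states = ["a", "b", "c", "d"]
inputs = ["0"]
trans = {("a", "0"): {"b"}, ("b", "0"): {"c"}, ("c", "0"): {"c"}, ("d", "0"): {"c"}}
output = {"a": 0, "b": 0, "c": 1, "d": 0}
print(refine(states, inputs, trans, output))   # splits {a, b, d} into {a} and {b, d}
```
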
  5. This paper presents a new method of controller synthesis for hidden mode switched systems, where the disturbances are the quantities that are affected by the unobserved switches. Rather than using model discrimination techniques that rely on modifying desired control actions to achieve identification, the controller uses consistency sets which map the measured external behaviors to a belief about which mode signal is being executed and a control action. This hybrid controller is a prefix-based controller, where the prefixes come from an offline constructed belief graph that incorporates prior information about switching sequences with potential reachable sets of the dynamics. While the mode signal is hidden to the controller, the system’s location on the belief graph is fully observed and allows for this problem to be transformed into a design problem in which a discrete mode, in terms of beliefs, is directly observed. Finally, it is shown that affine controllers dependent on prefixes of such beliefs can be synthesized via linear programming.
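
A minimal, hypothetical sketch of the consistency-set idea from the abstract above is given below: with scalar outputs and per-mode interval predictions, the belief is the set of mode prefixes whose predicted intervals contain every measurement observed so far. The actual construction uses reachable sets of the full dynamics and an offline belief graph, neither of which is reproduced; the mode names and intervals here are made up for illustration.

```python
from itertools import product

def predicted_interval(mode):
    """Hypothetical one-step output interval under each hidden mode."""
    return {"nominal": (-1.0, 1.0), "faulty": (0.5, 2.0)}[mode]

def consistent_prefixes(measurements, modes, horizon):
    """Mode prefixes whose predicted intervals can explain every measurement so far."""
    belief = []
    for prefix in product(modes, repeat=horizon):
        ok = all(predicted_interval(m)[0] <= y <= predicted_interval(m)[1]
                 for m, y in zip(prefix, measurements))
        if ok:
            belief.append(prefix)
    return belief

measurements = [0.2, 1.4]          # observed external behavior
belief = consistent_prefixes(measurements, ["nominal", "faulty"], horizon=2)
print(belief)   # with these numbers only ('nominal', 'faulty') survives
```
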
  6. We present a method for learning multi-stage tasks from demonstrations by learning the logical structure and atomic propositions of a consistent linear temporal logic (LTL) formula. The learner is given successful but potentially suboptimal demonstrations, where the demonstrator is optimizing a cost function while satisfying the LTL formula, and the cost function is uncertain to the learner. Our algorithm uses the Karush-Kuhn-Tucker (KKT) optimality conditions of the demonstrations together with a counterexample-guided falsification strategy to learn the atomic proposition parameters and logical structure of the LTL formula, respectively. We provide theoretical guarantees on the conservativeness of the recovered atomic proposition sets, as well as completeness of the search for an LTL formula consistent with the demonstrations. We evaluate our method on high-dimensional nonlinear systems by learning LTL formulas explaining multi-stage tasks on 7-DOF arm and quadrotor systems and show that it outperforms competing methods for learning LTL formulas from positive examples.
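
As a toy stand-in for the KKT-based recovery of atomic proposition parameters described above, the sketch below checks whether a demonstrated endpoint could be optimal for the cost ||x - x0||^2 subject to ending inside a candidate box-shaped proposition [lo, hi]: stationarity requires non-negative multipliers on the active box faces, and candidate parameters that admit no such multipliers are rejected. The counterexample-guided search over logical structure is not reproduced, and all numbers are illustrative.

```python
import numpy as np

def kkt_consistent(x_star, x0, lo, hi, tol=1e-6):
    """Stationarity and dual feasibility for min ||x - x0||^2 s.t. lo <= x <= hi at x_star."""
    grad = 2.0 * (x_star - x0)
    for i in range(len(x_star)):
        on_lo = abs(x_star[i] - lo[i]) < tol
        on_hi = abs(x_star[i] - hi[i]) < tol
        if on_lo and not on_hi:
            if grad[i] < -tol:          # multiplier on the lower face would be negative
                return False
        elif on_hi and not on_lo:
            if grad[i] > tol:           # multiplier on the upper face would be negative
                return False
        elif not on_lo and not on_hi:
            if abs(grad[i]) > tol:      # no active face available to cancel the gradient
                return False
    return True

x0 = np.array([0.0, 0.0])
x_star = np.array([1.0, 0.0])           # demonstrated endpoint
# Candidate box whose nearest face passes through x_star: consistent with optimality.
print(kkt_consistent(x_star, x0, lo=np.array([1.0, -1.0]), hi=np.array([2.0, 1.0])))   # True
# Candidate box for which x_star is strictly interior yet not the cost minimizer: rejected.
print(kkt_consistent(x_star, x0, lo=np.array([0.5, -1.0]), hi=np.array([2.0, 1.0])))   # False
```
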