
Title: A Continuous Optimization Approach to Drift Counteraction Optimal Control
Drift counteraction optimal control (DCOC) aims to optimize control so as to maximize the time (or a yield) until the system trajectory exits a prescribed set, which may represent safety constraints, operating limits, and/or efficiency requirements. Conventional approaches to DCOC problems formulated in discrete time are based on dynamic programming (DP) or mixed-integer programming (MIP), both of which can become computationally prohibitive for higher-order systems. In this paper, we propose a novel approach to discrete-time DCOC problems based on a nonlinear programming formulation with purely continuous variables. We show that this continuous optimization-based approach leads to the same exit time as the conventional MIP-based approach while being computationally more efficient. This is illustrated by numerical examples representing drift counteraction control for an indoor airship.
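As a concrete illustration of the exit-time objective (a toy sketch, not the paper's formulation): consider a scalar system x_{k+1} = x_k + d + u_k with drift d and bounded control |u_k| <= u_max < d, where the trajectory must stay in a prescribed interval. All names and numerical values below are illustrative assumptions.

```python
def exit_time(x0, controls, drift, x_lo, x_hi):
    """Count steps until the scalar state x_{k+1} = x_k + drift + u_k
    leaves the prescribed set [x_lo, x_hi]."""
    x, k = x0, 0
    for u in controls:
        if not (x_lo <= x <= x_hi):
            break  # trajectory has exited; stop counting
        x += drift + u
        k += 1
    return k

# drift pushes the state up; the control can only partially counteract it
horizon, drift, u_max = 20, 0.3, 0.2
no_control = exit_time(0.5, [0.0] * horizon, drift, 0.0, 1.0)
counteract = exit_time(0.5, [-u_max] * horizon, drift, 0.0, 1.0)
```

With full counteraction the net drift shrinks from 0.3 to 0.1 per step, so the trajectory stays in the set three times longer; a DCOC solver searches over the control sequence to maximize exactly this count.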
Journal Name: Proceedings of 2021 American Control Conference
Page Range or eLocation-ID: 3824 to 3829
Sponsoring Org: National Science Foundation
More Like this
  1. With an increase in the frequency of natural disasters such as hurricanes that disrupt supply from the grid, there is a greater need for resiliency in electric supply. Rooftop solar photovoltaic (PV) panels along with batteries can provide resiliency to a house in a blackout caused by a natural disaster. Our previous work showed that intelligent control can reduce the size of a PV+battery system for the same level of post-blackout service compared to a conventional system that does not employ it. The intelligent controller proposed is based on model predictive control (MPC), which has two main challenges. One, it requires simple yet accurate models, as it involves real-time optimization. Two, the discrete actuation for residential loads (on/off) makes the underlying optimization problem a mixed-integer program (MIP), which is challenging to solve. An attractive alternative to MPC is reinforcement learning (RL), as the real-time control computation is both model-free and simple. These advantages come with trade-offs: RL requires computationally expensive offline learning, and its performance is sensitive to various design choices. In this work, we propose an RL-based controller. We compare its performance with the MPC controller proposed in our prior work and with a non-intelligent baseline controller. The RL controller is found to provide resiliency performance, by commanding critical loads and batteries, similar to MPC, with a significant reduction in computational effort.
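The mixed-integer structure mentioned above comes from the on/off load decisions. A minimal sketch (all loads, priorities, and numbers are hypothetical, not from the paper): choosing which loads to serve within an energy budget is a small binary selection problem, solvable by brute force at toy scale but requiring MIP or learned policies at realistic scale.

```python
from itertools import product

def best_load_schedule(load_kwh, priority, budget_kwh):
    """Brute-force the binary on/off load decisions (the source of the
    MIP structure): maximize total served priority within the budget."""
    best_on, best_value = None, -1.0
    for on in product((0, 1), repeat=len(load_kwh)):
        energy = sum(e * o for e, o in zip(load_kwh, on))
        if energy <= budget_kwh:
            value = sum(p * o for p, o in zip(priority, on))
            if value > best_value:
                best_on, best_value = on, value
    return best_on

# three hypothetical loads: fridge, lights, A/C
schedule = best_load_schedule([1.0, 0.5, 2.0], [3.0, 1.0, 5.0], 1.6)
```

Here the budget rules out the single highest-priority load (the A/C), so the optimizer serves the fridge and lights instead; the number of candidate schedules doubles with each load, which is why the paper's real-time MPC faces an MIP.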
  2. The design of cyber-physical systems (CPSs) requires methods and tools that can efficiently reason about the interaction between discrete models, e.g., representing the behaviors of "cyber" components, and continuous models of physical processes. Boolean methods such as satisfiability (SAT) solving are successful in tackling large combinatorial search problems for the design and verification of hardware and software components. On the other hand, problems in control, communications, signal processing, and machine learning often rely on convex programming as a powerful solution engine. However, despite their strengths, neither approach would work in isolation for CPSs. In this paper, we present a new satisfiability modulo convex programming (SMC) framework that integrates SAT solving and convex optimization to efficiently reason about Boolean and convex constraints at the same time. We exploit the properties of a class of logic formulas over Boolean and nonlinear real predicates, termed monotone satisfiability modulo convex formulas, whose satisfiability can be checked via a finite number of convex programs. Following the lazy satisfiability modulo theory (SMT) paradigm, we develop a new decision procedure for monotone SMC formulas, which coordinates SAT solving and convex programming to provide a satisfying assignment or determine that the formula is unsatisfiable. A key step in our coordination scheme is the efficient generation of succinct infeasibility proofs for inconsistent constraints that can support conflict-driven learning and accelerate the search. We demonstrate our approach on different CPS design problems, including spacecraft docking mission control, robotic motion planning, and secure state estimation. We show that SMC can handle more complex problem instances than state-of-the-art alternative techniques based on SMT solving and mixed-integer convex programming.
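The lazy coordination loop described above can be caricatured in a few lines (a toy sketch under strong simplifying assumptions: the "SAT solver" is plain enumeration, the "convex programs" are 1-D interval intersections, and the infeasibility proof is simply blocking the failed assignment):

```python
from itertools import product

def intersect(intervals):
    """Feasibility check standing in for a convex program:
    intersect 1-D intervals."""
    lo = max(iv[0] for iv in intervals)
    hi = min(iv[1] for iv in intervals)
    return (lo, hi) if lo <= hi else None

def smc_solve(clauses, theory, n_vars, box):
    """Lazy SMT-style loop: propose a Boolean assignment, check the
    constraints activated by its true literals, block it if inconsistent."""
    blocked = set()
    for bits in product((False, True), repeat=n_vars):
        if bits in blocked:
            continue
        # each clause is an iterable of (var_index, required_value) literals
        if not all(any(bits[v] == val for v, val in cl) for cl in clauses):
            continue
        active = [box] + [theory[i] for i, on in enumerate(bits) if on]
        feasible = intersect(active)
        if feasible is not None:
            return bits, feasible
        blocked.add(bits)  # minimal "infeasibility proof": exclude assignment
    return None

# b0 forces x <= 0, b1 forces x >= 1, x must lie in [0.5, 2]; require b0 or b1
result = smc_solve([[(0, True), (1, True)]],
                   {0: (float("-inf"), 0.0), 1: (1.0, float("inf"))},
                   2, (0.5, 2.0))
```

The real framework replaces enumeration with conflict-driven SAT search and replaces the blocked assignment with a succinct infeasibility certificate over only the inconsistent constraints, which prunes far more of the search space.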
  3. Sequential Convex Programming (SCP) has recently gained significant popularity as an effective method for solving optimal control problems and has been successfully applied in several different domains. However, the theoretical analysis of SCP has received comparatively limited attention, and it is often restricted to discrete-time formulations. In this paper, we present a unifying theoretical analysis of a fairly general class of SCP procedures for continuous-time optimal control problems. In addition to the derivation of convergence guarantees in a continuous-time setting, our analysis reveals two new numerical and practical insights. First, we show how one can more easily account for manifold-type constraints, which are a defining feature of optimal control of mechanical systems. Second, we show how our theoretical analysis can be leveraged to accelerate SCP-based optimal control methods by infusing techniques from indirect optimal control.
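A minimal one-dimensional caricature of the SCP idea (not the paper's algorithm): to minimize the nonconvex f(x) = x^4 - x^2, split it into a convex term x^4 plus a concave term -x^2, linearize the concave term around the current iterate, and solve the resulting convex subproblem in closed form.

```python
def scp_minimize(x0, iters=80):
    """Successive convexification for f(x) = x**4 - x**2 (with x > 0):
    linearizing the concave part -x**2 at x_k gives the convex subproblem
    minimize x**4 - 2*x_k*x + const, whose stationarity condition
    4*x**3 = 2*x_k yields the closed-form update below."""
    x = x0
    for _ in range(iters):
        x = (x / 2.0) ** (1.0 / 3.0)
    return x

x_star = scp_minimize(1.0)  # converges to the true minimizer 1/sqrt(2)
```

Each subproblem is convex and solvable exactly, and the iterates contract toward the stationary point of the original nonconvex objective; full SCP methods add trust regions and constraint linearization on top of this pattern.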
  4. In this paper, we propose a convex optimization approach to chance-constrained drift counteraction optimal control (DCOC) problems for linear systems with additive stochastic disturbances. Chance-constrained DCOC aims to compute an optimal control law to maximize the time duration before the probability of violating a prescribed set of constraints can no longer be maintained to be below a specified risk level. While conventional approaches to this problem involve solving a mixed-integer programming problem, we show that an optimal solution to the problem can also be found by solving a convex second-order cone programming problem without integer variables. We illustrate the application of chance-constrained DCOC to an automotive adaptive cruise control example.
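The deterministic reformulation that makes such chance constraints convex can be sketched for the scalar Gaussian case (a standard textbook identity, not the paper's full second-order cone program): P(x + w <= b) >= 1 - eps with w ~ N(0, sigma^2) is equivalent to x <= b - sigma * Phi^{-1}(1 - eps).

```python
from statistics import NormalDist

def tightened_bound(b, sigma, eps):
    """Deterministic equivalent of the scalar Gaussian chance constraint
    P(x + w <= b) >= 1 - eps, w ~ N(0, sigma**2): tighten the bound b
    by the (1 - eps)-quantile of the noise."""
    return b - sigma * NormalDist().inv_cdf(1.0 - eps)

# a 5% risk level shrinks the bound by about 1.645 standard deviations
safe_b = tightened_bound(1.0, 0.1, 0.05)
```

Because the tightening is a fixed constant for a given risk level, the chance constraint becomes an ordinary linear (or, with state-dependent variance, second-order cone) constraint, which is what removes the integer variables in the convex DCOC formulation.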
  5. We study the problem of learning sequential decision-making policies in settings with multiple state-action representations. Such settings naturally arise in many domains, such as planning (e.g., multiple integer programming formulations) and various combinatorial optimization problems (e.g., those with both integer programming and graph-based formulations). Inspired by the classical co-training framework for classification, we study the problem of co-training for policy learning. We present sufficient conditions under which learning from two views can improve upon learning from a single view alone. Motivated by these theoretical insights, we present a meta-algorithm for co-training for sequential decision making. Our framework is compatible with both reinforcement learning and imitation learning. We validate the effectiveness of our approach across a wide range of tasks, including discrete/continuous control and combinatorial optimization.