Title: Sequential convex programming for non-linear stochastic optimal control
This work introduces a sequential convex programming framework for non-linear, finite-dimensional stochastic optimal control, where uncertainties are modeled by a multidimensional Wiener process. We prove that any accumulation point of the sequence of iterates generated by sequential convex programming is a candidate locally-optimal solution for the original problem in the sense of the stochastic Pontryagin Maximum Principle. Moreover, we provide sufficient conditions for the existence of at least one such accumulation point. We then leverage these properties to design a practical numerical method for solving non-linear stochastic optimal control problems based on a deterministic transcription of stochastic sequential convex programming.
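As a rough illustration of the iteration pattern behind such a framework, the sketch below runs a generic sequential convex programming loop on a discretized deterministic surrogate: the nonlinear dynamics are linearized about the previous trajectory, a convex trust-region subproblem is solved, and the reference is updated. The pendulum-like drift f, the horizon, cost, and trust-region radius are illustrative assumptions and do not reproduce the paper's stochastic transcription.

    import numpy as np
    import cvxpy as cp

    def f(x, u):
        # Hypothetical nonlinear drift (pendulum-like), for illustration only.
        return np.array([x[1], -np.sin(x[0]) + u[0]])

    def f_jac(x, u):
        # Jacobians of f with respect to the state and the control.
        A = np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])
        B = np.array([[0.0], [1.0]])
        return A, B

    T, dt, n, m = 30, 0.1, 2, 1
    x0 = np.array([1.0, 0.0])
    x_ref = np.tile(x0, (T + 1, 1))      # previous iterate (trajectory)
    u_ref = np.zeros((T, m))
    trust = 0.5                          # trust-region radius

    for _ in range(10):
        X, U = cp.Variable((T + 1, n)), cp.Variable((T, m))
        cost, constr = 0, [X[0] == x0]
        for k in range(T):
            A, B = f_jac(x_ref[k], u_ref[k])
            # Convexify the dynamics by linearizing about the previous iterate.
            xdot = f(x_ref[k], u_ref[k]) + A @ (X[k] - x_ref[k]) + B @ (U[k] - u_ref[k])
            constr += [X[k + 1] == X[k] + dt * xdot,
                       cp.norm(X[k] - x_ref[k], "inf") <= trust]
            cost += cp.sum_squares(X[k]) + cp.sum_squares(U[k])
        cp.Problem(cp.Minimize(cost), constr).solve()
        x_ref, u_ref = X.value, U.value  # update the reference trajectory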
Award ID(s): 1931815
PAR ID: 10377627
Author(s) / Creator(s):
Date Published:
Journal Name: ESAIM: Control, Optimisation and Calculus of Variations
Volume: 28
ISSN: 1292-8119
Page Range / eLocation ID: 64
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. In this paper, we propose a convex optimization approach to chance-constrained drift counteraction optimal control (DCOC) problems for linear systems with additive stochastic disturbances. Chance-constrained DCOC aims to compute an optimal control law that maximizes the time duration before the probability of violating a prescribed set of constraints can no longer be kept below a specified risk level. While conventional approaches to this problem involve solving a mixed-integer programming problem, we show that an optimal solution can also be found by solving a convex second-order cone programming problem without integer variables. We illustrate the application of chance-constrained DCOC on an automotive adaptive cruise control example.
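As a sketch of the kind of convex reformulation involved, the snippet below applies the standard Gaussian tightening of a single chance constraint, replacing P(a^T x <= b) >= 1 - eps with a back-off on the mean; with the fixed covariance assumed here the back-off is a constant, whereas the DCOC setting requires the full second-order cone structure described above. All data values are illustrative.

    import numpy as np
    import cvxpy as cp
    from scipy.stats import norm

    n = 3
    a, b, eps = np.ones(n), 5.0, 0.05
    Sigma_half = 0.1 * np.eye(n)      # assumed square root of the state covariance
    x_mean = cp.Variable(n)           # mean state to be chosen

    # The chance constraint becomes a deterministic constraint on the mean with
    # a back-off of norm.ppf(1 - eps) * ||Sigma_half @ a||.
    backoff = norm.ppf(1 - eps) * np.linalg.norm(Sigma_half @ a)
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x_mean - 2 * np.ones(n))),
                      [a @ x_mean + backoff <= b])
    prob.solve()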
  2. We study stochastic gradient descent (SGD) in the master-worker architecture under Byzantine attacks. Building upon recent advances in algorithmic high-dimensional robust statistics, in each SGD iteration the master employs a non-trivial decoding to estimate the true gradient from the unbiased stochastic gradients received from workers, some of which may be corrupt. We provide convergence analyses for both strongly-convex and non-convex smooth objectives under standard SGD assumptions. The approximation error of our solution in both settings is controlled by the mini-batch size of the stochastic gradients, and can be made as small as desired provided that workers use a sufficiently large mini-batch size. Our algorithm can tolerate less than a 1/3 fraction of Byzantine workers. It approximately finds the optimal parameters in the strongly-convex setting exponentially fast, and reaches an approximate stationary point in the non-convex setting at a rate of 1/T, thus matching the convergence rates of vanilla SGD in the Byzantine-free setting.
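A minimal sketch of the master-worker pattern follows, with coordinate-wise median standing in for the high-dimensional robust gradient estimator described above; the quadratic objective, worker counts, and noise levels are made-up values for illustration only.

    import numpy as np

    def robust_aggregate(worker_grads):
        # worker_grads has shape (num_workers, dim); some rows may be Byzantine.
        # Coordinate-wise median is a simple robust stand-in for the decoder.
        return np.median(worker_grads, axis=0)

    rng = np.random.default_rng(0)
    dim, honest_workers, byzantine_workers = 5, 7, 3
    theta, target, lr = np.zeros(dim), np.ones(dim), 0.1

    for step in range(200):
        # Honest workers send noisy mini-batch gradients of ||theta - target||^2.
        honest = np.stack([2 * (theta - target) + 0.05 * rng.normal(size=dim)
                           for _ in range(honest_workers)])
        corrupt = rng.normal(scale=10.0, size=(byzantine_workers, dim))
        grads = np.vstack([honest, corrupt])
        theta = theta - lr * robust_aggregate(grads)   # master-side SGD update
    # theta ends up near target despite fewer than 1/3 of workers being corrupt.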
  3. In this paper, we address a finite-horizon stochastic optimal control problem with covariance assignment and input energy constraints for discrete-time stochastic linear systems with partial state information. In our approach, we consider separation-based control policies that correspond to sequences of control laws that are affine functions of either the complete history of the output estimation errors, that is, the differences between the actual output measurements and their corresponding estimated outputs produced by a discrete-time Kalman filter, or a truncation of the same history. This particular feedback parametrization allows us to associate the stochastic optimal control problem with a tractable semidefinite (convex) program. We argue that the proposed procedure for the reduction of the stochastic optimal control problem to a convex program has significant advantages in terms of improved scalability and tractability over the approaches proposed in the relevant literature. 
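As a much-simplified illustration of how a covariance requirement becomes a semidefinite (convex) constraint, the sketch below bounds the one-step covariance of x+ = (A + BK)x + w via a Schur-complement LMI with the standard substitution Y = K Sigma. The matrices, the single-step horizon, and the full-state feedback are assumptions of the sketch, not the output-feedback, finite-horizon parametrization described above.

    import numpy as np
    import cvxpy as cp

    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.eye(2)
    Sigma = np.eye(2)                 # current state covariance (assumed known)
    W = 0.01 * np.eye(2)              # process-noise covariance
    Sigma_f = 0.8 * np.eye(2)         # desired upper bound on the next covariance

    Y = cp.Variable((2, 2))           # Y = K @ Sigma (linearizing change of variables)
    M = A @ Sigma + B @ Y
    # Schur complement of (A + B K) Sigma (A + B K)^T + W <= Sigma_f.
    lmi = cp.bmat([[Sigma_f - W, M], [M.T, Sigma]]) >> 0
    prob = cp.Problem(cp.Minimize(cp.sum_squares(Y)), [lmi])
    prob.solve()
    K = Y.value @ np.linalg.inv(Sigma)   # recover the feedback gain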
  4. From optimal transport to robust dimensionality reduction, a plethora of machine learning applications can be cast as min-max optimization problems over Riemannian manifolds. Though many min-max algorithms have been analyzed in the Euclidean setting, it has proved elusive to translate these results to the Riemannian case. Zhang et al. [2022] have recently shown that geodesic convex-concave Riemannian problems always admit saddle-point solutions. Inspired by this result, we study whether a performance gap between Riemannian and optimal Euclidean-space convex-concave algorithms is necessary. We answer this question in the negative: we prove that the Riemannian corrected extragradient (RCEG) method achieves last-iterate convergence at a linear rate in the geodesically strongly-convex-concave case, matching the Euclidean result. Our results also extend to the stochastic or non-smooth case, where RCEG and Riemannian gradient descent ascent (RGDA) achieve near-optimal convergence rates up to factors depending on the curvature of the manifold.
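The Euclidean extragradient baseline that RCEG generalizes can be sketched on a bilinear saddle problem min_x max_y x^T A y, as below; on a manifold, the two updates would instead use exponential maps and parallel transport, which this sketch omits. The matrix A and the step size are illustrative.

    import numpy as np

    A = np.array([[2.0, 1.0], [1.0, 3.0]])
    x, y, step = np.ones(2), np.ones(2), 0.2

    for _ in range(300):
        # Extrapolation step: provisional move from the current point.
        x_half, y_half = x - step * (A @ y), y + step * (A.T @ x)
        # Corrected step: update the current point using gradients at the midpoint.
        x, y = x - step * (A @ y_half), y + step * (A.T @ x_half)
    # (x, y) is driven to the unique equilibrium (0, 0) of the bilinear game,
    # whereas plain simultaneous gradient descent ascent diverges on this problem.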
  5. We present a data-driven algorithm for efficiently computing stochastic control policies for general joint chance constrained optimal control problems. Our approach leverages the theory of kernel distribution embeddings, which allows representing expectation operators as inner products in a reproducing kernel Hilbert space. This framework enables approximately reformulating the original problem using a dataset of observed trajectories from the system without imposing prior assumptions on the parameterization of the system dynamics or the structure of the uncertainty. By optimizing over a finite subset of stochastic open-loop control trajectories, we relax the original problem to a linear program over the control parameters that can be efficiently solved using standard convex optimization techniques. We demonstrate our proposed approach in simulation on a system with nonlinear non-Markovian dynamics navigating in a cluttered environment. 
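The core primitive, approximating an expectation (here a constraint-satisfaction probability) as a weighted sum of observed samples via kernel distribution embeddings, can be sketched as below. The Gaussian kernel, the one-dimensional toy data, and the regularization value are assumptions of the sketch; the linear program over open-loop control trajectories described above is not reproduced.

    import numpy as np

    def rbf(A, B, sigma=0.5):
        # Gaussian (RBF) kernel matrix between the rows of A and B.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))

    rng = np.random.default_rng(0)
    n, lam = 200, 1e-3
    U = rng.uniform(-1, 1, size=(n, 1))                 # observed inputs (e.g. controls)
    Y = np.sin(3 * U) + 0.1 * rng.normal(size=(n, 1))   # observed outcomes

    # Embedding weights for a query input u = 0.5; the expectation of any function
    # of the outcome given u is then approximated by a weighted sum of samples.
    w = np.linalg.solve(rbf(U, U) + lam * n * np.eye(n), rbf(U, np.array([[0.5]])))
    safe = (Y > 0.0).astype(float)                      # indicator of constraint satisfaction
    prob_safe = (safe.T @ w).item()                     # approx. P(Y > 0 | u = 0.5)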