Title: Convergence Analysis for an Online Data-Driven Feedback Control Algorithm
This paper presents a convergence analysis for a novel data-driven feedback control algorithm designed to generate online controls from partial, noisy observational data. The algorithm combines a particle-filter-based state estimation component, which estimates the controlled system's state from indirect observations, with an efficient stochastic maximum principle (SMP)-type optimal control solver. By integrating weak convergence techniques for the particle filter with convergence analysis for the SMP control solver, we derive a weak convergence result for the optimization procedure that searches for the optimal data-driven feedback control. Numerical experiments validate the theoretical findings.
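As a rough illustration of the state-estimation component described above, one step of a bootstrap particle filter (predict through the Euler-discretized dynamics, reweight by the observation likelihood, resample when weights degenerate) might look like the following sketch. The Gaussian noise model, function names, and resampling threshold are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def particle_filter_step(particles, weights, observation, drift, obs_fn,
                         dt, proc_std, obs_std, rng):
    """One predict-update-resample step of a bootstrap particle filter.

    particles : (N, d) ensemble approximating the state distribution
    weights   : (N,) importance weights summing to one
    """
    n = len(particles)
    # Predict: propagate each particle through the Euler-discretized dynamics.
    particles = particles + drift(particles) * dt \
        + proc_std * np.sqrt(dt) * rng.standard_normal(particles.shape)
    # Update: reweight by the observation likelihood (Gaussian noise assumed).
    residual = observation - obs_fn(particles)
    log_lik = -0.5 * np.sum(residual ** 2, axis=-1) / obs_std ** 2
    weights = weights * np.exp(log_lik - log_lik.max())
    weights /= weights.sum()
    # Resample when the effective sample size drops below half the ensemble.
    if 1.0 / np.sum(weights ** 2) < n / 2:
        idx = rng.choice(n, size=n, p=weights)
        particles, weights = particles[idx], np.full(n, 1.0 / n)
    return particles, weights
```

In a data-driven feedback loop, the weighted ensemble mean would serve as the state estimate fed to the control solver at each observation time.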
Award ID(s):
2142672
PAR ID:
10596524
Author(s) / Creator(s):
Publisher / Repository:
MDPI
Date Published:
Journal Name:
Mathematics
Volume:
12
Issue:
16
ISSN:
2227-7390
Page Range / eLocation ID:
2584
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract: In this paper, we consider stochastic optimal control of systems driven by stochastic differential equations with an irregular drift coefficient. We establish a necessary and sufficient stochastic maximum principle. To achieve this, we first derive an explicit representation of the first variation process (in the Sobolev sense) of the controlled diffusion. Since the drift coefficient is not smooth, the representation is given in terms of the local time of the state process. Then we construct a sequence of optimal control problems with smooth coefficients by an approximation argument. Finally, we use Ekeland's variational principle to obtain an approximating adjoint process from which we derive the maximum principle by passing to the limit. The work is notably motivated by the optimal consumption problem of investors paying wealth tax.
  2. Duality between estimation and optimal control is a problem of rich historical significance. The first duality principle appears in the seminal paper of Kalman-Bucy, where the problem of minimum variance estimation is shown to be dual to a linear quadratic (LQ) optimal control problem. Duality offers a constructive proof technique to derive the Kalman filter equation from the optimal control solution. This paper generalizes the classical duality result of Kalman-Bucy to the nonlinear filter: The state evolves as a continuous-time Markov process and the observation is a nonlinear function of state corrupted by an additive Gaussian noise. A dual process is introduced as a backward stochastic differential equation (BSDE). The process is used to transform the problem of minimum variance estimation into an optimal control problem. Its solution is obtained from an application of the maximum principle, and subsequently used to derive the equation of the nonlinear filter. The classical duality result of Kalman-Bucy is shown to be a special case. 
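In the linear-Gaussian special case, the Kalman-Bucy duality described above can be checked computationally: the steady-state Kalman gain is the transpose of the LQR gain for the dual system obtained by swapping (A, C) with (A.T, C.T) and reading the noise covariances as LQR weights. A minimal discrete-time sketch, with illustrative matrices A, C, W, V that are assumptions rather than anything from the paper:

```python
import numpy as np

def riccati_lqr(A, B, Q, R, iters=500):
    """Fixed-point iteration for the discrete-time LQR Riccati equation."""
    P = np.eye(A.shape[0])
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # LQR gain
        P = Q + A.T @ P @ (A - B @ K)
    return P, K

# Duality: for the filtering model x+ = A x + w, y = C x + v with
# Cov(w) = W and Cov(v) = V, the predictor-form Kalman gain is the
# transpose of the LQR gain for the dual system (A.T, C.T, W, V).
A = np.array([[0.9, 0.2], [0.0, 0.8]])   # illustrative stable dynamics
C = np.array([[1.0, 0.0]])               # partial observation
W = 0.1 * np.eye(2)                      # process noise covariance
V = np.array([[0.5]])                    # measurement noise covariance

_, K_dual = riccati_lqr(A.T, C.T, W, V)
kalman_gain = K_dual.T                   # equals A P C.T (C P C.T + V)^-1
```

Running the filter Riccati recursion directly produces the same gain, which is exactly the constructive proof technique the duality offers in the linear case.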
  3. We study Mean Field stochastic control problems where the cost function and the state dynamics depend upon the joint distribution of the controlled state and the control process. We prove suitable versions of the Pontryagin stochastic maximum principle, both in necessary and in sufficient form, which extend the known conditions to this general framework. Furthermore, we suggest a variational approach to study a weak formulation of these control problems. We show a natural connection between this weak formulation and optimal transport on path space, which inspires a novel discretization scheme. 
  4. In modern machine learning systems, distributed algorithms are deployed across applications to ensure data privacy and optimal utilization of computational resources. This work offers a fresh perspective to model, analyze, and design distributed optimization algorithms through the lens of stochastic multi-rate feedback control. We show that a substantial class of distributed algorithms—including popular Gradient Tracking for decentralized learning, and FedPD and Scaffold for federated learning—can be modeled as a certain discrete-time stochastic feedback-control system, possibly with multiple sampling rates. This key observation allows us to develop a generic framework to analyze the convergence of the entire algorithm class. It also enables one to easily add desirable features such as differential privacy guarantees, or to deal with practical settings such as partial agent participation, communication compression, and imperfect communication in algorithm design and analysis. 
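As a concrete instance of the algorithm class discussed above, decentralized gradient tracking can be written as a two-state feedback system: a consensus/descent update on the iterates and a tracker that feeds back the change in local gradients so its network average follows the average gradient. A minimal sketch; the quadratic losses and mixing matrix in the test below are illustrative assumptions:

```python
import numpy as np

def gradient_tracking(grads, W, x0, alpha=0.05, iters=300):
    """Decentralized gradient tracking over a network with mixing matrix W.

    grads : list of per-agent gradient functions
    W     : doubly stochastic mixing matrix, shape (n_agents, n_agents)
    x0    : (n_agents, d) initial iterates, one row per agent
    """
    x = x0.copy()
    g = np.stack([gi(xi) for gi, xi in zip(grads, x)])
    y = g.copy()  # tracker of the network-average gradient
    for _ in range(iters):
        x_new = W @ x - alpha * y                  # mix, then descend along tracker
        g_new = np.stack([gi(xi) for gi, xi in zip(grads, x_new)])
        y = W @ y + g_new - g                      # feedback correction: mean(y) = mean(grad)
        x, g = x_new, g_new
    return x
```

Viewed through the stochastic feedback-control lens, the tracker update is the "inner loop" running at the communication rate while the gradient evaluations play the role of a sampled feedback signal.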
  5. Duality of control and estimation allows mapping recent advances in data-guided control to the estimation setup. This paper formalizes and utilizes such a mapping to consider learning the optimal (steady-state) Kalman gain when process and measurement noise statistics are unknown. Specifically, building on the duality between synthesizing optimal control and estimation gains, the filter design problem is formalized as direct policy learning. In this direction, the duality is used to extend existing theoretical guarantees of direct policy updates for the Linear Quadratic Regulator (LQR) to establish global convergence of the Gradient Descent (GD) algorithm for the estimation problem, while addressing subtle differences between the two synthesis problems. Subsequently, a Stochastic Gradient Descent (SGD) approach is adopted to learn the optimal Kalman gain without knowledge of the noise covariances. The results are illustrated via several numerical examples.
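The direct-policy-learning idea above can be sketched for a scalar model: treat the filter gain itself as the policy, and run gradient descent on the steady-state error variance. This is only a toy illustration; the scalar system, the specific numbers, and the finite-difference gradient (standing in for an analytic policy gradient or SGD estimate) are all assumptions:

```python
import numpy as np

def error_variance(L, a=0.9, q=0.2, r=0.5):
    """Steady-state error variance of the estimator xhat+ = a*xhat + L*(y - xhat)
    for the scalar model x+ = a*x + w, y = x + v (illustrative numbers)."""
    closed = a - L                      # closed-loop error dynamics
    assert abs(closed) < 1.0, "gain must stabilize the error dynamics"
    return (q + L ** 2 * r) / (1.0 - closed ** 2)

def learn_gain(L0=0.5, step=0.05, iters=2000, eps=1e-6):
    """Direct policy learning: gradient descent on the gain itself, with a
    finite-difference gradient standing in for the analytic policy gradient."""
    L = L0
    for _ in range(iters):
        grad = (error_variance(L + eps) - error_variance(L - eps)) / (2 * eps)
        L -= step * grad
    return L
```

For this scalar model the learned gain converges to the steady-state Kalman predictor gain a*P/(P + r), where P solves the scalar Riccati equation, which is what the paper's global convergence guarantee asserts in much greater generality.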