

Title: Reinforcement Learning for Adaptive Optimal Stationary Control of Linear Stochastic Systems
This article studies the adaptive optimal stationary control of continuous-time linear stochastic systems with both additive and multiplicative noises, using reinforcement learning techniques. Based on policy iteration, a novel off-policy reinforcement learning algorithm, named optimistic least-squares-based policy iteration, is proposed; starting from an initial admissible control policy, it iteratively finds near-optimal policies of the adaptive optimal stationary control problem directly from input/state data, without explicitly identifying any system matrices. Under mild conditions, the solutions given by the proposed optimistic least-squares-based policy iteration are proved to converge to a small neighborhood of the optimal solution with probability one. The application of the proposed algorithm to a triple inverted pendulum example validates its feasibility and effectiveness.
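As context for the abstract above, the following is a minimal, self-contained sketch of the off-policy least-squares policy-iteration template in a deliberately simplified setting: a discrete-time, deterministic LQR problem rather than the paper's continuous-time stochastic one. The plant matrices A and B, the weights Q and R, the initial gain K0, and the exploration noise level are all illustrative assumptions, not values from the paper; the sketch only shows how a near-optimal gain can be fit by least squares from input/state data without identifying the system matrices.

```python
import numpy as np

def phi(z):
    """Quadratic basis: z' H z = phi(z)' vech(H), with off-diagonals doubled."""
    i, j = np.triu_indices(len(z))
    f = np.outer(z, z)[i, j].copy()
    f[i != j] *= 2.0
    return f

def unvech(theta, p):
    """Rebuild the symmetric matrix H from its stacked upper triangle."""
    H = np.zeros((p, p))
    i, j = np.triu_indices(p)
    H[i, j] = theta
    H[j, i] = theta
    return H

def offpolicy_lspi(data, Q, R, K0, n_iter=8):
    """Policy iteration from (x, u, x_next) samples: least-squares evaluation
    of the current policy's quadratic Q-function, then greedy gain improvement."""
    n, m = Q.shape[0], R.shape[0]
    K = K0
    for _ in range(n_iter):
        Phi, c = [], []
        for x, u, x_next in data:
            z = np.concatenate([x, u])
            z_pi = np.concatenate([x_next, -K @ x_next])  # on-policy next action
            Phi.append(phi(z) - phi(z_pi))                # Bellman-residual regressor
            c.append(x @ Q @ x + u @ R @ u)               # observed one-step cost
        theta, *_ = np.linalg.lstsq(np.array(Phi), np.array(c), rcond=None)
        H = unvech(theta, n + m)
        K = np.linalg.solve(H[n:, n:], H[n:, :n])         # improvement: K = Huu^{-1} Hux
    return K

# Exploratory input/state data from a plant whose matrices the learner never uses.
rng = np.random.default_rng(1)
A = np.array([[1.0, 0.1], [0.2, 0.9]])    # "unknown" dynamics (assumed for the demo)
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.eye(1)
K0 = np.array([[4.0, 0.0]])               # an initial admissible (stabilizing) gain

data, x = [], np.array([1.0, -1.0])
for _ in range(500):
    u = -K0 @ x + 0.5 * rng.standard_normal(1)   # exploration noise for excitation
    x_next = A @ x + B @ u
    data.append((x, u, x_next))
    x = x_next

print("learned gain:", offpolicy_lspi(data, Q, R, K0))
```

Here the `lstsq` fit plays the role of a least-squares policy evaluation step; the paper's "optimistic" variant and its stochastic continuous-time analysis are not reproduced by this toy.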
Award ID(s):
1903781
PAR ID:
10479218
Author(s) / Creator(s):
;
Editor(s):
Alessandro Astolfi
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Transactions on Automatic Control
Volume:
68
Issue:
4
ISSN:
0018-9286
Page Range / eLocation ID:
2383 to 2390
Subject(s) / Keyword(s):
Adaptive optimal control, data-driven control, policy iteration, reinforcement learning, robustness, stochastic control
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper studies learning-based optimal control for a class of infinite-dimensional linear time-delay systems. The aim is to fill a gap in adaptive dynamic programming (ADP), where adaptive optimal control of infinite-dimensional systems has not been addressed. A key strategy is to combine the classical model-based linear quadratic (LQ) optimal control of time-delay systems with state-of-the-art reinforcement learning (RL) techniques. Both model-based and data-driven policy iteration (PI) approaches are proposed to solve the corresponding algebraic Riccati equation (ARE) with guaranteed convergence. The proposed PI algorithm can be considered a generalization of ADP to infinite-dimensional time-delay systems. The efficiency of the proposed algorithm is demonstrated on a practical application arising from autonomous driving in mixed-traffic environments, where human drivers' reaction delay is considered. (A finite-dimensional code sketch of the delay-augmentation idea appears after this list.)
  2. This paper addresses the problem of model-free reinforcement learning for Robust Markov Decision Processes (RMDPs) with large state spaces. The goal of the RMDP framework is to find a policy that is robust against the parameter uncertainties caused by the mismatch between the simulator model and real-world settings. We first propose the Robust Least Squares Policy Evaluation algorithm, a multi-step online model-free learning algorithm for policy evaluation, and prove its convergence using stochastic approximation techniques. We then propose the Robust Least Squares Policy Iteration (RLSPI) algorithm for learning the optimal robust policy, and give a general weighted Euclidean norm bound on the error (closeness to optimality) of the resulting policy. Finally, we demonstrate the performance of RLSPI on some standard benchmark problems. (A sketch of the underlying least-squares policy-iteration template appears after this list.)
  3. This paper studies the adaptive optimal control problem for a class of linear time-delay systems described by delay differential equations (DDEs). A crucial strategy is to take advantage of recent developments in reinforcement learning (RL) and adaptive dynamic programming (ADP) to develop novel methods that learn adaptive optimal controllers from finite samples of input and state data. Data-driven policy iteration (PI) is proposed to solve the infinite-dimensional algebraic Riccati equation (ARE) iteratively in the absence of exact model knowledge. Interestingly, the proposed recursive PI algorithm is new in the present context of continuous-time time-delay systems, even when the model knowledge is assumed to be known. The efficacy of the proposed learning-based control methods is validated by means of practical applications arising from metal cutting and autonomous driving. (A delay-free sketch of the model-based PI recursion appears after this list.)
  4. This paper presents a novel decentralized control strategy for a class of uncertain nonlinear large-scale systems with mismatched interconnections. First, it is shown that the decentralized controller for the overall system can be represented by an array of optimal control policies of auxiliary subsystems. Then, within the framework of adaptive dynamic programming, a simultaneous policy iteration (SPI) algorithm is developed to solve the Hamilton–Jacobi–Bellman equations associated with the auxiliary subsystem optimal control policies. The convergence of the SPI algorithm is guaranteed by an equivalence relationship. To implement the SPI algorithm, actor and critic neural networks are applied to approximate the optimal control policies and the optimal value functions, respectively, while both the least squares method and the Monte Carlo integration technique are employed to derive the unknown weight parameters. Furthermore, by using Lyapunov's direct method, the overall system with the obtained decentralized controller is proved to be asymptotically stable. Finally, the effectiveness of the proposed decentralized control scheme is illustrated via simulations for nonlinear plants and unstable power systems. (A compact sketch of the least-squares/Monte Carlo critic fit appears after this list.)
  5. This paper presents a first solution to the problem of adaptive LQR for continuous-time linear periodic systems. Specifically, reinforcement learning and adaptive dynamic programming (ADP) techniques are used to develop two algorithms that obtain near-optimal controllers. First, policy iteration (PI) and value iteration (VI) methods are proposed for the case when the model is known. Then, PI-based and VI-based off-policy ADP algorithms are derived to find near-optimal solutions directly from input/state data collected along the system trajectories, without exact knowledge of the system dynamics. The effectiveness of the derived algorithms is validated using the well-known lossy Mathieu equation. (A sketch of the periodic Riccati recursion appears after this list.)
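For item 1 above, a heavily simplified, finite-dimensional surrogate of the idea: approximate a discrete-time plant with a d-step state delay by state augmentation, then solve the resulting ordinary LQ problem by Riccati iteration. This is not the paper's infinite-dimensional continuous-time PI; the plant matrices and delay length are illustrative assumptions.

```python
import numpy as np

def augment_delay(A, Ad, B, d):
    """Stack z_k = [x_k, x_{k-1}, ..., x_{k-d}] so that the d-step delayed
    term Ad x_{k-d} becomes an ordinary (memoryless) state coupling."""
    n, m = A.shape[0], B.shape[1]
    N = n * (d + 1)
    F = np.zeros((N, N))
    F[:n, :n] = A
    F[:n, -n:] = Ad                 # delayed state enters the current update
    F[n:, :-n] = np.eye(n * d)      # shift register holding the past states
    G = np.zeros((N, m))
    G[:n] = B
    return F, G

def riccati_iteration(F, G, Q, R, n_iter=400):
    """Plain value-iteration sweep of the discrete-time Riccati recursion."""
    P = np.copy(Q)
    for _ in range(n_iter):
        K = np.linalg.solve(R + G.T @ P @ G, G.T @ P @ F)
        P = Q + F.T @ P @ (F - G @ K)
    return np.linalg.solve(R + G.T @ P @ G, G.T @ P @ F), P

# Toy plant (assumed): scalar state with a 3-step delayed feedback term.
A, Ad, B = np.array([[1.05]]), np.array([[-0.3]]), np.array([[1.0]])
F, G = augment_delay(A, Ad, B, d=3)
K, P = riccati_iteration(F, G, np.eye(F.shape[0]), np.eye(1))
print("gain on [x_k, ..., x_{k-3}]:", K)
```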
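For item 2, a minimal sketch of the non-robust least-squares policy-iteration template that RLSPI builds on (LSTDQ evaluation plus greedy improvement); the robust variant would replace the next-state term with a worst case over the uncertainty set, which this toy does not do. The features, samples, and discount factor are assumptions.

```python
import numpy as np

def lstdq(samples, phi, policy, gamma, k):
    """Least-squares policy evaluation: fit w with Q(s, a) ~ phi(s, a)' w."""
    A, b = np.zeros((k, k)), np.zeros(k)
    for s, a, r, s_next in samples:
        f = phi(s, a)
        f_next = phi(s_next, policy(s_next))      # next action drawn from the policy
        A += np.outer(f, f - gamma * f_next)
        b += r * f
    return np.linalg.solve(A + 1e-8 * np.eye(k), b)  # tiny ridge for conditioning

def lspi(samples, phi, actions, gamma, k, n_iter=10):
    """Alternate LSTDQ evaluation with greedy policy improvement."""
    w = np.zeros(k)
    for _ in range(n_iter):
        policy = lambda s, w=w: max(actions, key=lambda a: phi(s, a) @ w)
        w = lstdq(samples, phi, policy, gamma, k)
    return w

# Toy usage: 2 states, 2 actions, exact one-hot features (assumed data).
phi = lambda s, a: np.eye(4)[2 * s + a]
samples = [(0, 0, 0.0, 0), (0, 1, 1.0, 1), (1, 0, 0.0, 0), (1, 1, 1.0, 1)]
print("Q-weights:", lspi(samples, phi, actions=[0, 1], gamma=0.9, k=4))
```

On this toy the iteration reaches the optimal weights (Q = 9 for action 0 and Q = 10 for action 1 in both states) within two improvements.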
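For item 3, a delay-free sketch of the model-based PI recursion that the paper generalizes (Kleinman's algorithm for the continuous-time LQR ARE): evaluate the current gain through a Lyapunov equation, then improve it. The matrices, weights, and initial gain below are assumptions for the demo.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def kleinman_pi(A, B, Q, R, K0, n_iter=20):
    """Model-based policy iteration for the continuous-time LQR ARE."""
    K = K0
    for _ in range(n_iter):
        Ac = A - B @ K                                   # closed loop under current gain
        # Policy evaluation: solve Ac' P + P Ac + Q + K' R K = 0
        P = solve_continuous_lyapunov(Ac.T, -(Q + K.T @ R @ K))
        # Policy improvement: K <- R^{-1} B' P
        K = np.linalg.solve(R, B.T @ P)
    return K, P

# Toy plant (assumed): 2-state unstable system with a stabilizing initial gain.
A = np.array([[0.0, 1.0], [1.0, -1.0]])
B = np.array([[0.0], [1.0]])
K, P = kleinman_pi(A, B, np.eye(2), np.eye(1), K0=np.array([[3.0, 3.0]]))
print("near-optimal gain:", K)
```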
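For item 4, a compact illustration of the critic-fitting step the abstract describes: approximate the value function with a polynomial basis and solve for its weights by least squares over Monte Carlo samples of the state space, alternating with policy improvement. The scalar dynamics, basis, and initial policy are assumptions; the paper's decentralized, neural-network setting is far more general.

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: -x + 0.5 * x**3            # assumed drift
g = lambda x: 1.0                        # assumed input gain
q = lambda x: x**2                       # state cost
r = 1.0                                  # control weight

sigma_grad = lambda x: np.array([2 * x, 4 * x**3])   # gradient of basis [x^2, x^4]
u = lambda x: -2.0 * x                   # initial admissible policy

for _ in range(8):                       # simultaneous policy iteration loop
    X = rng.uniform(-1.0, 1.0, 200)      # Monte Carlo sample of the domain
    # Critic: least-squares fit of the GHJB residual dV/dx (f + g u) + q + r u^2 = 0
    A = np.array([sigma_grad(x) * (f(x) + g(x) * u(x)) for x in X])
    b = np.array([-(q(x) + r * u(x) ** 2) for x in X])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Actor: u = -(1 / 2r) g dV/dx
    u = lambda x, w=w: -0.5 / r * g(x) * (sigma_grad(x) @ w)

print("critic weights:", w)
```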
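For item 5, a minimal sketch of the model-based half of the story: a value-iteration sweep of the periodic Riccati recursion for a discrete-time periodic system. The paper works in continuous time and also gives data-driven versions; the period-2 toy system below is an assumption.

```python
import numpy as np

def periodic_lqr(As, Bs, Q, R, n_sweeps=300):
    """Value-iteration sweep of the periodic Riccati recursion: cycle the
    backward recursion over one period until P_0..P_{T-1} settle."""
    T = len(As)
    Ps = [np.copy(Q) for _ in range(T)]
    for _ in range(n_sweeps):
        for k in reversed(range(T)):
            A, B, Pn = As[k], Bs[k], Ps[(k + 1) % T]
            K = np.linalg.solve(R + B.T @ Pn @ B, B.T @ Pn @ A)
            Ps[k] = Q + A.T @ Pn @ (A - B @ K)
    Ks = [np.linalg.solve(R + Bs[k].T @ Ps[(k + 1) % T] @ Bs[k],
                          Bs[k].T @ Ps[(k + 1) % T] @ As[k]) for k in range(T)]
    return Ks, Ps

# Toy period-2 system (assumed): alternating scalar dynamics.
As = [np.array([[1.1]]), np.array([[0.9]])]
Bs = [np.eye(1), np.eye(1)]
Ks, Ps = periodic_lqr(As, Bs, np.eye(1), np.eye(1))
print("periodic gains:", Ks)
```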