Title: Conversion of a Class of Stochastic Control Problems to Fundamental-Solution Deterministic Control Problems
A class of nonlinear, stochastic staticization control problems (including minimization problems with smooth, convex, coercive payoffs) driven by diffusion dynamics with constant diffusion coefficient is considered. Using dynamic programming and tools from static duality, a fundamental solution form is obtained, where the same solution can be used for a variety of terminal costs without re-solution of the problem. Further, this fundamental solution takes the form of a deterministic control problem rather than a stochastic control problem.
Award ID(s): 1908918
NSF-PAR ID: 10171194
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the American Control Conference
ISSN: 0743-1619
Page Range / eLocation ID: 2814-2819
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. A class of nonlinear, stochastic staticization control problems (including minimization problems with smooth, convex, coercive payoffs) driven by diffusion dynamics with constant diffusion coefficient is considered. A fundamental solution form is obtained where the same solution can be used for a limited variety of terminal costs without re-solution of the problem. One may convert this fundamental solution form from a stochastic control problem form to a deterministic control problem form. This yields an equivalence between certain second-order (in space) Hamilton-Jacobi partial differential equations (HJ PDEs) and associated first-order HJ PDEs. This reformulation has substantial numerical implications. 
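To illustrate the kind of equivalence described above, the following displays generic forms of a second-order HJ PDE (arising from a stochastic control problem with constant diffusion coefficient sigma) and an associated first-order HJ PDE (arising from a deterministic control problem). These are schematic shapes only; the Hamiltonians H and H-bar here are placeholders, not the paper's precise operators.

```latex
% Second-order HJ PDE (stochastic control problem):
0 = \partial_t V + H\big(x, \nabla_x V\big)
    + \tfrac{1}{2}\operatorname{tr}\!\big(\sigma\sigma^{\top} D_x^2 V\big).
% Associated first-order HJ PDE (deterministic reformulation):
0 = \partial_t W + \bar{H}\big(x, \nabla_x W\big).
```

The numerical implication is that the first-order form is amenable to characteristic-based and curse-of-dimensionality-free methods that do not apply directly to the second-order form.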
  2. By exploiting min-plus linearity, semiconcavity, and semigroup properties of dynamic programming, a fundamental solution semigroup for a class of approximate finite horizon linear infinite dimensional optimal control problems is constructed. Elements of this fundamental solution semigroup are parameterized by the time horizon, and can be used to approximate the solution of the corresponding finite horizon optimal control problem for any terminal cost. They can also be composed to compute approximations on longer horizons. The value function approximation provided takes the form of a min-plus convolution of a kernel with the terminal cost. A general construction for this kernel is provided, along with a spectral representation for a restricted class of sub-problems. 
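The min-plus convolution structure described above can be sketched numerically: the value function approximation is an infimum over terminal states of a kernel plus the terminal cost. In this minimal sketch, the quadratic kernel `G` and terminal cost `psi` are illustrative placeholders on a 1-D grid, not the paper's actual fundamental-solution kernel.

```python
import numpy as np

# State grid x, terminal-state grid z, and time horizon T (all illustrative).
x = np.linspace(-2.0, 2.0, 101)
z = np.linspace(-2.0, 2.0, 101)
T = 1.0

# Placeholder kernel G_T(x, z) and terminal cost psi(z).
G = 0.5 * (x[:, None] - z[None, :]) ** 2 / T
psi = z ** 2

# Min-plus convolution: W_T(x) = min_z [ G_T(x, z) + psi(z) ].
W = np.min(G + psi[None, :], axis=1)
```

The key property this illustrates is that once the kernel `G` is computed, a new terminal cost `psi` requires only a fresh min-plus convolution, not a re-solution of the underlying control problem.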
  3. We consider the problem of finite-horizon optimal control of a discrete-time linear time-varying system subject to a stochastic disturbance and fully observable state. The initial state of the system is drawn from a known Gaussian distribution, and the final state distribution is required to reach a given target Gaussian distribution, while minimizing the expected value of the control effort. We derive the linear optimal control policy by first presenting an efficient solution for the diffusion-less case, and we then solve the case with diffusion by reformulating the system as a superposition of diffusion-less systems. We show that the resulting solution coincides with that of an LQG problem with a particular terminal cost weight matrix. 
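A basic building block of the covariance-steering setting above is how the state covariance of a discrete-time linear time-varying system evolves under linear state feedback. The sketch below propagates the covariance recursion for x_{k+1} = A_k x_k + B_k u_k + D_k w_k with u_k = K_k x_k and w_k standard Gaussian noise; the matrices and gains are illustrative placeholders, not the paper's derived optimal policy.

```python
import numpy as np

# Illustrative time-varying system data and feedback gains over N steps.
N = 10
A = [np.array([[1.0, 0.1], [0.0, 1.0]]) for _ in range(N)]
B = [np.array([[0.0], [0.1]]) for _ in range(N)]
D = [0.05 * np.eye(2) for _ in range(N)]
K = [np.array([[-1.0, -2.0]]) for _ in range(N)]

# Propagate the state covariance: Sigma_{k+1} = Acl Sigma Acl^T + D D^T,
# where Acl = A_k + B_k K_k is the closed-loop transition matrix.
Sigma = np.eye(2)  # initial state covariance
for k in range(N):
    Acl = A[k] + B[k] @ K[k]
    Sigma = Acl @ Sigma @ Acl.T + D[k] @ D[k].T
```

Covariance steering amounts to choosing the gains (and feedforward terms) so that this terminal `Sigma` matches a prescribed target covariance at minimum expected control effort.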
  4. An optimal control problem is considered for linear stochastic differential equations with quadratic cost functional. The coefficients of the state equation and the weights in the cost functional are bounded operators on the spaces of square integrable random variables. The main motivation of our study is linear quadratic (LQ, for short) optimal control problems for mean-field stochastic differential equations. Open-loop solvability of the problem is characterized as the solvability of a system of linear coupled forward-backward stochastic differential equations (FBSDE, for short) with operator coefficients, together with a convexity condition for the cost functional. Under proper conditions, the well-posedness of such an FBSDE, which leads to the existence of an open-loop optimal control, is established. Finally, as applications of our main results, a general mean-field LQ control problem and a concrete mean-variance portfolio selection problem in the open-loop case are solved. 
  5. We study Mean Field stochastic control problems where the cost function and the state dynamics depend upon the joint distribution of the controlled state and the control process. We prove suitable versions of the Pontryagin stochastic maximum principle, both in necessary and in sufficient form, which extend the known conditions to this general framework. Furthermore, we suggest a variational approach to study a weak formulation of these control problems. We show a natural connection between this weak formulation and optimal transport on path space, which inspires a novel discretization scheme. 