

Title: Time-inconsistent stochastic optimal control problems and backward stochastic Volterra integral equations
An optimal control problem is considered for a stochastic differential equation with the cost functional determined by a backward stochastic Volterra integral equation (BSVIE, for short). This kind of cost functional covers general discounting (including exponential and non-exponential) situations with a recursive feature. It is known that such a problem is time-inconsistent in general. Therefore, instead of finding a global optimal control, we look for a time-consistent locally near optimal equilibrium strategy. With the idea of multi-person differential games, a family of approximate equilibrium strategies is constructed, associated with partitions of the time interval. By sending the mesh size of the time interval partition to zero, an equilibrium Hamilton–Jacobi–Bellman (HJB, for short) equation is derived, through which the equilibrium value function and an equilibrium strategy are obtained. Under certain conditions, a verification theorem is proved and the well-posedness of the equilibrium HJB equation is established. As a sort of Feynman–Kac formula for the equilibrium HJB equation, a new class of BSVIEs (containing the diagonal value Z(r, r) of Z(⋅ , ⋅)) is naturally introduced and the well-posedness of such equations is briefly presented.
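For orientation, a minimal sketch of such a recursive cost functional (a generic BSVIE form, not necessarily the exact formulation of the paper): given a controlled state process X(⋅) and a control u(⋅), the cost evaluated at time t can be taken as J(t; u(⋅)) = Y(t), where the pair (Y(⋅), Z(⋅,⋅)) solves a BSVIE of the type
$$ Y(t) = \psi\big(t, X(T)\big) + \int_t^T g\big(t, s, X(s), u(s), Y(s), Z(t,s)\big)\, ds - \int_t^T Z(t,s)\, dW(s), \qquad t \in [0,T]. $$
Choosing g(t,s,⋅) = e^{-\lambda (s-t)} h(⋅) recovers exponential discounting, while general joint dependence on (t,s) covers non-exponential discounting with a recursive feature, which is precisely the source of time inconsistency.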
Award ID(s):
1812921
NSF-PAR ID:
10220031
Author(s) / Creator(s):
Editor(s):
Buttazzo, G.; Casas, E.; de Teresa, L.; Glowinski, R.; Leugering, G.; Trélat, E.; Zhang, X.
Date Published:
Journal Name:
ESAIM: Control, Optimisation and Calculus of Variations
Volume:
27
ISSN:
1292-8119
Page Range / eLocation ID:
22
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. An optimal control problem is considered for a stochastic differential equation containing a state-dependent regime switching, with a recursive cost functional. Due to the non-exponential discounting in the cost functional, the problem is time-inconsistent in general. Therefore, instead of finding a global optimal control (which is not possible), we look for a time-consistent (approximately) locally optimal equilibrium strategy. Such a strategy can be represented through the solution to a system of partial differential equations, called an equilibrium Hamilton–Jacobi–Bellman (HJB) equation, which is constructed via a sequence of multi-person differential games. A verification theorem is proved and, under proper conditions, the well-posedness of the equilibrium HJB equation is established as well.
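    As an illustration of the non-exponential discounting mentioned above (a generic example, not the paper's exact recursive functional), consider a cost of the form
    $$ J(t, x; u(\cdot)) = \mathbb{E}_t\Big[ \int_t^T \nu(s - t)\, h\big(s, X(s), u(s)\big)\, ds + \mu(T - t)\, \phi\big(X(T)\big) \Big], $$
    with general discount functions ν and μ. Unless ν(r) = μ(r) = e^{-\lambda r}, Bellman's principle of optimality fails: a control that is optimal from the standpoint of time t is typically no longer optimal when re-evaluated at a later time, which is why one looks for an equilibrium strategy instead of a global optimum.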
  2. An optimal control problem is considered for linear stochastic differential equations with quadratic cost functional. The coefficients of the state equation and the weights in the cost functional are bounded operators on the spaces of square integrable random variables. The main motivation of our study is linear quadratic (LQ, for short) optimal control problems for mean-field stochastic differential equations. Open-loop solvability of the problem is characterized by the solvability of a system of coupled linear forward-backward stochastic differential equations (FBSDE, for short) with operator coefficients, together with a convexity condition for the cost functional. Under proper conditions, the well-posedness of such an FBSDE, which leads to the existence of an open-loop optimal control, is established. Finally, as applications of our main results, a general mean-field LQ control problem and a concrete mean-variance portfolio selection problem in the open-loop case are solved.
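    For concreteness, a simplified mean-field LQ sketch (with deterministic matrix coefficients rather than the operator coefficients treated above) reads
    $$ dX(s) = \big( A X(s) + \bar{A}\, \mathbb{E}[X(s)] + B u(s) + \bar{B}\, \mathbb{E}[u(s)] \big)\, ds + \big( C X(s) + D u(s) \big)\, dW(s), $$
    $$ J(u(\cdot)) = \mathbb{E} \int_0^T \Big( \langle Q X(s), X(s) \rangle + \langle R u(s), u(s) \rangle \Big)\, ds + \mathbb{E}\, \langle G X(T), X(T) \rangle . $$
    Regarding the coefficients as bounded operators acting on spaces of square-integrable random variables places this problem, and more general ones, within the abstract framework described above.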
  3. This paper studies an optimal stochastic impulse control problem on a finite time horizon with a decision lag, by which we mean that after an impulse is made, a fixed number of time units has to elapse before the next impulse is allowed to be made. The continuity of the value function is proved. A suitable version of the dynamic programming principle is established, which takes into account the dependence of the state process on the elapsed time. The corresponding Hamilton–Jacobi–Bellman (HJB) equation is derived, which exhibits some special features of the problem. The value function of this optimal impulse control problem is characterized as the unique viscosity solution to the corresponding HJB equation. An optimal impulse control is constructed provided the value function is given. Moreover, the limiting case with the waiting time approaching 0 is discussed.
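    For readers unfamiliar with this class of problems, a standard impulse-control sketch (without the decision lag, so not the exact system derived above): the value function V solves an HJB quasi-variational inequality of the form
    $$ \min\Big\{ -\partial_t V - \mathcal{L} V - f,\; V - \mathcal{M} V \Big\} = 0, \qquad \mathcal{M} V(t,x) = \inf_{\xi} \big[ V(t, x + \xi) + K + c(\xi) \big], $$
    where L is the generator of the diffusion and K > 0 is a fixed cost charged per impulse. With a decision lag, the elapsed time since the last impulse becomes an additional state variable and the obstacle term MV is only active once that lag has expired; this is the special feature of the HJB equation referred to above.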
  4. Stochastic differential games have been used extensively to model agents' competitions in finance, for instance, in P2P lending platforms from the Fintech industry, the banking system for systemic risk, and insurance markets. The recently proposed machine learning algorithm, deep fictitious play, provides a novel and efficient tool for finding Markovian Nash equilibria of large $N$-player asymmetric stochastic differential games [J. Han and R. Hu, Mathematical and Scientific Machine Learning Conference, pages 221-245, PMLR, 2020]. By incorporating the idea of fictitious play, the algorithm decouples the game into $N$ sub-optimization problems, and identifies each player's optimal strategy with the deep backward stochastic differential equation (BSDE) method in parallel and repeatedly. In this paper, we prove the convergence of deep fictitious play (DFP) to the true Nash equilibrium. We also show that the strategy based on DFP forms an $\epsilon$-Nash equilibrium. We generalize the algorithm by proposing a new approach to decouple the games, and present numerical results of large population games showing the empirical convergence of the algorithm beyond the technical assumptions in the theorems.
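    The fictitious-play decoupling is easy to state in code. The following Python sketch is purely schematic: the function solve_best_response, standing in for the deep BSDE sub-solver, and all argument names are hypothetical placeholders rather than part of the cited algorithm's actual implementation.

# Schematic fictitious-play loop for an N-player stochastic differential game.
# solve_best_response(i, strategies) is a hypothetical stand-in for the deep
# BSDE solver that returns player i's best response while the other players'
# strategies are frozen at their current iterates.
def deep_fictitious_play(N, initial_strategies, solve_best_response, num_stages=50):
    strategies = list(initial_strategies)      # one strategy per player
    for _ in range(num_stages):                # outer fictitious-play rounds
        # The N sub-problems below are independent and can be solved in parallel.
        new_strategies = [solve_best_response(i, strategies) for i in range(N)]
        strategies = new_strategies            # simultaneous update of all players
    return strategies                          # approximate Markovian Nash equilibrium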

     
  5. The well-posedness of stochastic Navier–Stokes equations with various noises is a hot topic in the area of stochastic partial differential equations. Recently, stochastic Navier–Stokes equations involving the fractional Laplacian have received more and more attention. Due to the scaling-invariant property of the fractional stochastic equations concerned, it is natural and also very important to study the well-posedness of stochastic fractional Navier–Stokes equations in the associated critical Fourier–Besov spaces. In this paper, we are concerned with the three-dimensional stochastic fractional Navier–Stokes equation driven by multiplicative noise. We aim to establish the well-posedness of solutions of the concerned equation. To this end, by utilising the Fourier localisation technique, we first establish the local existence and uniqueness of the solutions in the critical Fourier–Besov space $$\dot{\mathcal {B}}^{4-2\alpha -\frac{3}{p}}_{p,r}$$. Then, under the condition that the initial data is sufficiently small, we show the global existence of the solutions in the probabilistic sense.
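    For orientation, the equation referred to above can be written, in a generic form under simplified assumptions (not necessarily the paper's precise setting), as
    $$ du + \big[ (-\Delta)^{\alpha} u + (u \cdot \nabla) u + \nabla p \big]\, dt = G(u)\, dW(t), \qquad \nabla \cdot u = 0, $$
    on [0,T] × R^3, where (-Δ)^α is the fractional Laplacian and G(u) dW models the multiplicative noise. The deterministic part is invariant under the scaling u_\lambda(t,x) = \lambda^{2\alpha-1} u(\lambda^{2\alpha} t, \lambda x), and this scaling is what singles out $$\dot{\mathcal {B}}^{4-2\alpha -\frac{3}{p}}_{p,r}$$ as the critical Fourier–Besov space mentioned above.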

     