Motivated by various distributed control applications, we consider a linear system with Gaussian noise observed by multiple sensors which transmit measurements over a dynamic lossy network. We characterize the stationary optimal sensor scheduling policy for the finite horizon, discounted, and long-term average cost problems and show that the value iteration algorithm converges to a solution of the average cost problem. We further show that the suboptimal policies provided by the rolling horizon truncation of the value iteration also guarantee geometric ergodicity and provide near-optimal average cost. Lastly, we provide qualitative characterizations of the multidimensional set of measurement loss rates for which the system is stabilizable for a static network, significantly extending earlier results on intermittent observations.
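The stabilizability-versus-loss-rate phenomenon the abstract refers to can be illustrated with a crude single-sensor sketch (not the paper's multi-sensor, dynamic-network analysis): for a scalar unstable system, iterating an averaged error-covariance recursion with measurement arrival probability `lam` suggests boundedness when `lam` exceeds roughly `1 - 1/a^2`, and divergence below it. All parameter values here are illustrative assumptions.

```python
# Hedged sketch: averaged Riccati-style recursion for a scalar system
# x+ = a x + w observed through a lossy link with arrival probability lam.
def averaged_riccati(a, q, r, lam, steps=500):
    P = 1.0
    for _ in range(steps):
        received = a * a * P * r / (P + r) + q   # Kalman update applied
        dropped = a * a * P + q                  # prediction only
        P = lam * received + (1 - lam) * dropped
        if P > 1e12:                             # treat as divergent
            return float("inf")
    return P

a = 1.2  # unstable mode; critical arrival rate is roughly 1 - 1/a^2 ~ 0.31
print(averaged_riccati(a, 1.0, 1.0, lam=0.6))   # bounded
print(averaged_riccati(a, 1.0, 1.0, lam=0.1))   # diverges
```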
A min-plus fundamental solution semigroup for a class of approximate infinite dimensional optimal control problems
By exploiting min-plus linearity, semiconcavity, and semigroup properties of dynamic programming, a fundamental solution semigroup for a class of approximate finite horizon linear infinite dimensional optimal control problems is constructed. Elements of this fundamental solution semigroup are parameterized by the time horizon, and can be used to approximate the solution of the corresponding finite horizon optimal control problem for any terminal cost. They can also be composed to compute approximations on longer horizons. The value function approximation provided takes the form of a min-plus convolution of a kernel with the terminal cost. A general construction for this kernel is provided, along with a spectral representation for a restricted class of sub-problems.
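The min-plus operations the abstract describes can be sketched numerically on a state grid: the value function is a min-plus "matrix-vector product" of a horizon-t kernel with the terminal cost, and kernels compose by min-plus "matrix-matrix product" to extend the horizon. The kernel below is a random stand-in, not the paper's construction; only the algebra is illustrated.

```python
# Hedged sketch of min-plus (tropical) linearity on a discrete grid.
import numpy as np

def minplus_mv(K, psi):
    # V(x_i) = min_j [ K[i, j] + psi[j] ]  (min-plus convolution with psi)
    return (K + psi[None, :]).min(axis=1)

def minplus_mm(A, B):
    # (A (x) B)[i, j] = min_k [ A[i, k] + B[k, j] ]  (kernel composition)
    return (A[:, :, None] + B[None, :, :]).min(axis=1)

rng = np.random.default_rng(0)
n = 20
K = rng.random((n, n))   # stand-in horizon-t kernel on the grid
psi = rng.random(n)      # arbitrary terminal cost

# Semigroup property on the grid: propagating psi through the composed
# horizon-2t kernel equals propagating it through the horizon-t kernel twice.
V_composed = minplus_mv(minplus_mm(K, K), psi)
V_twice = minplus_mv(K, minplus_mv(K, psi))
print(np.allclose(V_composed, V_twice))  # True
```

The second computation never forms the composed kernel, which is the practical appeal: one stored kernel serves any terminal cost and any multiple of the base horizon.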
- Award ID(s): 1908918
- PAR ID: 10170566
- Date Published:
- Journal Name: Proceedings of the American Control Conference
- ISSN: 0743-1619
- Page Range / eLocation ID: 794-799
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In an earlier paper (https://doi.org/10.1137/21M1393315), the switch point algorithm was developed for solving optimal control problems whose solutions are singular, bang-bang, or both, and which possess a finite number of jump discontinuities in an optimal control at the points in time where the solution structure changes. The class of control problems considered there had a given initial condition but no terminal constraint. The theory is now extended to problems with both initial and terminal constraints, a structure that often arises in boundary-value problems. Substantial changes to the theory are needed to handle this more general setting. Nonetheless, the derivative of the cost with respect to a switch point is again the jump in the Hamiltonian at the switch point.
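The switch-point derivative formula can be checked on a toy problem (this is an illustration, not the paper's algorithm): for dx/dt = u with u = +1 on [0, s] and u = -1 on [s, T], x(0) = 0, and running cost x^2/2, both the cost and the costate are available in closed form, so the finite-difference derivative of the cost in s can be compared against the Hamiltonian jump.

```python
# Toy check: dC/ds equals the jump in H = x^2/2 + p*u at the switch point.
T = 2.0

def cost(s):
    # closed-form cost: integral of x(t)^2 / 2 with x = t, then x = 2s - t
    return (2 * s**3 - (2 * s - T)**3) / 6.0

def hamiltonian_jump(s):
    # costate p(s) = integral_s^T x dt with p(T) = 0 (formula below is for T = 2);
    # jump = p(s) * (u_before - u_after) = 2 p(s)
    p = 4 * s - 2 - 1.5 * s**2
    return 2 * p

s, h = 0.7, 1e-6
fd = (cost(s + h) - cost(s - h)) / (2 * h)   # central finite difference
print(fd, hamiltonian_jump(s))               # the two values agree
```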
-
A finite horizon nonlinear optimal control problem is considered for which the associated Hamiltonian satisfies a uniform semiconcavity property with respect to its state and costate variables. It is shown that the value function for this optimal control problem is equivalent to the value of a min-max game, provided the time horizon considered is sufficiently short. This further reduces to maximization of a linear functional over a convex set. It is further proposed that the min-max game can be relaxed to a type of stat (stationary) game, in which no time horizon constraint is involved.
-
A class of nonlinear, stochastic staticization control problems (including minimization problems with smooth, convex, coercive payoffs) driven by diffusion dynamics and constant diffusion coefficient is considered. Using dynamic programming and tools from static duality, a fundamental solution form is obtained where the same solution can be used for a variety of terminal costs without re-solution of the problem. Further, this fundamental solution takes the form of a deterministic control problem rather than a stochastic control problem.
-
We consider the problem of finite-horizon optimal control of a discrete linear time-varying system subject to a stochastic disturbance and fully observable state. The initial state of the system is drawn from a known Gaussian distribution, and the final state distribution is required to reach a given target Gaussian distribution while minimizing the expected value of the control effort. We derive the linear optimal control policy by first presenting an efficient solution for the diffusion-less case, and we then solve the case with diffusion by reformulating the system as a superposition of diffusion-less systems. We show that the resulting solution coincides with an LQG problem with a particular terminal cost weight matrix.
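The single-step, scalar mechanics of steering a Gaussian to a target variance can be sketched as follows (the paper treats general discrete linear time-varying systems over a full horizon; every parameter value here is an illustrative assumption):

```python
# Hedged one-step sketch: x1 = a*x0 + u + w with u = k*x0 gives
# Var(x1) = (a + k)^2 * sig0^2 + sigw^2; pick the root k that hits the
# target variance while minimizing E[u^2] = k^2 * sig0^2.
import math, random

a, sig0, sigw = 1.5, 2.0, 0.5        # dynamics gain, initial and noise std
sig_target = 1.0                     # desired std of x1

c = (sig_target**2 - sigw**2) / sig0**2
assert c >= 0, "target variance must exceed the noise floor"
k = -a + math.sqrt(c)                # root of (a + k)^2 = c closest to zero

random.seed(0)
samples = []
for _ in range(200_000):
    x0 = random.gauss(0, sig0)
    w = random.gauss(0, sigw)
    samples.append((a + k) * x0 + w)  # closed-loop state x1
var = sum(s * s for s in samples) / len(samples)
print(var)                            # Monte Carlo estimate, close to 1.0
```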