By exploiting min-plus linearity, semiconcavity, and semigroup properties of dynamic programming, a fundamental solution semigroup for a class of approximate finite horizon linear infinite dimensional optimal control problems is constructed. Elements of this fundamental solution semigroup are parameterized by the time horizon, and can be used to approximate the solution of the corresponding finite horizon optimal control problem for any terminal cost. They can also be composed to compute approximations on longer horizons. The value function approximation provided takes the form of a min-plus convolution of a kernel with the terminal cost. A general construction for this kernel is provided, along with a spectral representation for a restricted class of sub-problems.
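The approximation described above takes the form of a min-plus (tropical) product: the value function is obtained by min-plus convolving a horizon-indexed kernel with the terminal cost, and kernels compose across horizons by the min-plus matrix product. A minimal numerical sketch on a finite grid, using randomly generated stand-in kernels (the paper's actual kernel construction is not reproduced here):

```python
import numpy as np

def minplus_matvec(S, phi):
    # (S ⊗ phi)(i) = min_j [ S[i, j] + phi[j] ]
    return np.min(S + phi[None, :], axis=1)

def minplus_matmat(S1, S2):
    # (S1 ⊗ S2)(i, k) = min_j [ S1[i, j] + S2[j, k] ]
    return np.min(S1[:, :, None] + S2[None, :, :], axis=1)

rng = np.random.default_rng(0)
n = 5
S1 = rng.uniform(0.0, 1.0, (n, n))   # stand-in kernel for horizon T1
S2 = rng.uniform(0.0, 1.0, (n, n))   # stand-in kernel for horizon T2
phi = rng.uniform(0.0, 1.0, n)       # terminal cost sampled on the grid

# Composing kernels first, then applying to the terminal cost, agrees
# with applying the two kernels in sequence (min-plus associativity).
v_a = minplus_matvec(minplus_matmat(S1, S2), phi)
v_b = minplus_matvec(S1, minplus_matvec(S2, phi))
assert np.allclose(v_a, v_b)
```

The assertion checks the composition property at the level of min-plus algebra only; for actual fundamental-solution kernels this is the mechanism that lets short-horizon solutions be chained into longer-horizon approximations.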
Optimal Control With Maximum Cost Performance Measure
Optimal control of discrete-time systems with a less common performance measure is investigated, in which the cost function to be minimized is the maximum, rather than the sum, of a cost per stage over the control horizon. Three control scenarios are studied: a finite-horizon, a discounted infinite-horizon, and an undiscounted infinite-horizon performance measure. For each case, the Bellman equation is derived by direct application of dynamic programming, and necessary and sufficient conditions for an optimal control are established around this equation. A motivating example on optimal control of dc-dc buck power converters is presented.
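For the finite-horizon case, the maximum-cost Bellman recursion replaces the usual sum with a max: V_k(x) = min_u max(g(x, u), V_{k+1}(f(x, u))), with V_N = 0. A hedged sketch on a made-up four-state, two-input system (not the paper's buck-converter example):

```python
import numpy as np

# Toy finite-horizon problem where the performance measure is the
# MAXIMUM of the per-stage costs, not their sum:
#   V_N(x) = 0,   V_k(x) = min_u max( g(x, u), V_{k+1}(f(x, u)) )
# The transition map f and stage cost g below are illustrative choices.
n_states, n_inputs, N = 4, 2, 10
f = np.array([[1, 0], [2, 1], [3, 2], [3, 3]])            # f(x, u)
g = np.array([[3., 1.], [2., 4.], [1., 2.], [0., 5.]])    # g(x, u)

V = np.zeros(n_states)               # terminal value V_N
for _ in range(N):
    Q = np.maximum(g, V[f])          # Q[x, u] = max(g(x, u), V(f(x, u)))
    V = Q.min(axis=1)                # minimize over the input u
```

On this example the backward recursion reaches a fixed point after one step, so the finite- and infinite-horizon undiscounted values coincide; e.g. from state 3, input 0 keeps the state at 3 with stage cost 0, giving a worst-stage cost of 0.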
- Award ID(s):
- 1941944
- PAR ID:
- 10212168
- Date Published:
- Journal Name:
- IEEE Control Systems Letters
- ISSN:
- 2475-1456
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
This paper is concerned with an optimal control problem for a mean-field linear stochastic differential equation with a quadratic functional on the infinite time horizon. Under suitable conditions, including stabilizability, the (strong) exponential, integral, and mean-square turnpike properties for the optimal pair are established. The key steps are to correctly formulate the corresponding static optimization problem and to find the equations determining the correction processes. These reveal the main feature of the stochastic problem, which differs significantly from the deterministic version of the theory.
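The turnpike property can be illustrated in the far simpler deterministic, scalar LQ setting: the optimal finite-horizon trajectory spends most of the horizon near the optimum of an associated static problem. All numbers below are illustrative and unrelated to the paper's mean-field stochastic setting:

```python
import numpy as np

# Scalar LQ tracking: x_{k+1} = a x_k + b u_k, minimize
#   q * sum_{k=1}^{T} (x_k - xref)^2 + r * sum_{k=0}^{T-1} u_k^2
# over the horizon T, with x_0 given and no terminal cost.
a, b, q, r, xref, x0, T = 0.5, 1.0, 1.0, 0.1, 1.0, 5.0, 40

# x_k = a^k x0 + sum_{j<k} a^(k-1-j) b u_j   ->   x = d + M u,  k = 1..T
M = np.zeros((T, T))
for k in range(1, T + 1):
    for j in range(k):
        M[k - 1, j] = a ** (k - 1 - j) * b
d = np.array([a ** k * x0 for k in range(1, T + 1)])

# The cost is an unconstrained quadratic in u; solve its normal equations.
u = np.linalg.solve(q * M.T @ M + r * np.eye(T), q * M.T @ (xref - d))
x = d + M @ u

# Static problem: min q (x - xref)^2 + r u^2  s.t.  x = a x + b u,
# i.e. u = (1 - a) x / b, minimized in closed form:
xbar = q * xref / (q + r * (1 - a) ** 2 / b ** 2)
```

Inspecting `x` shows the turnpike shape: after a short entering arc from `x0`, the mid-horizon states hug the static optimum `xbar`, with deviations only near the two ends of the horizon.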
-
In this paper, we study the event-triggered robust stabilization problem for nonlinear systems subject to mismatched perturbations and input constraints. First, by introducing an infinite-horizon cost function for the auxiliary system, we transform the robust stabilization problem into a constrained optimal control problem. Then, we prove that the solution of the event-triggered Hamilton–Jacobi–Bellman (ETHJB) equation arising in the constrained optimal control problem guarantees that the original system states are uniformly ultimately bounded (UUB). To solve the ETHJB equation, we present a single-network adaptive critic design (SN-ACD), whose critic network is tuned via gradient descent. Using the Lyapunov method, we demonstrate that all signals in the closed-loop auxiliary system are UUB. Finally, we provide two examples, including a pendulum system, to validate the proposed event-triggered control strategy.
-
Abstract The Moore-Gibson-Thompson (MGT) dynamics is considered. This third-order-in-time evolution arises in acoustic wave propagation, with applications in high-frequency ultrasound technology. The optimal boundary feedback control is constructed in order to achieve online regulation, which requires well-posedness of the associated Algebraic Riccati Equation. The paper by Lasiecka and Triggiani (2022) recently contributed a comprehensive study of the optimal control problem for the MGT third-order dynamics with boundary control over an infinite time horizon. A critical missing point in that study is the uniqueness (within a specific class) of solutions of the corresponding highly non-standard Algebraic Riccati Equation. The present note resolves this problem in the affirmative, thus completing the study of Lasiecka and Triggiani (2022) with the final goal of obtaining online feedback control that is also optimal.
-
Most existing methodologies for control of Gene Regulatory Networks (GRNs) assume that the immediate cost function at each state and time point is fully known. In this paper, we introduce an optimal control strategy for GRNs with unknown or partially known immediate cost functions. Toward this end, we adopt a partially-observed Boolean dynamical system (POBDS) model for the GRN and propose an Inverse Reinforcement Learning (IRL) methodology for quantifying the imperfect behavior of experts, obtained via prior biological knowledge or experimental data. The constructed cost function is then used to find the optimal infinite-horizon control strategy for the POBDS. The application of the proposed method using a single sequence of experimental data is investigated through numerical experiments on a melanoma gene regulatory network.
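Once a stage cost is available (learned via IRL in the paper; a made-up cost below), the downstream infinite-horizon control step can be sketched, for a fully observed toy Boolean network, as discounted value iteration over the 2^n states. The network rules, cost, and discount here are all hypothetical, and the partial-observability (POBDS) aspect is omitted:

```python
import numpy as np

n = 3                        # genes; a state is a bitvector in {0,1}^3
n_states = 2 ** n

def step(x, u):
    # Illustrative Boolean update rules; input u in {0,1} perturbs gene 0.
    x0, x1, x2 = (x >> 0) & 1, (x >> 1) & 1, (x >> 2) & 1
    y0 = (x1 & x2) ^ u
    y1 = x0 | x2
    y2 = x0 ^ x1
    return y0 | (y1 << 1) | (y2 << 2)

# Made-up stage cost standing in for the IRL-learned one:
# number of active genes plus a unit cost for intervening.
cost = np.array([[bin(x).count("1") + u for u in (0, 1)]
                 for x in range(n_states)], dtype=float)

gamma = 0.9                  # discount factor
V = np.zeros(n_states)
for _ in range(200):         # value iteration to (near) fixed point
    Q = np.array([[cost[x, u] + gamma * V[step(x, u)]
                   for u in (0, 1)] for x in range(n_states)])
    V = Q.min(axis=1)
policy = Q.argmin(axis=1)    # greedy intervention policy per state
```

The all-zero state is absorbing and cost-free under u = 0, so its value is 0; the learned policy trades the unit intervention cost against steering the network toward low-activity states.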