- Award ID(s):
- 1908918
- PAR ID:
- 10500956
- Publisher / Repository:
- IEEE
- Date Published:
- Journal Name:
- Proceedings of the American Control Conference
- ISSN:
- 2378-5861
- ISBN:
- 979-8-3503-2806-6
- Page Range / eLocation ID:
- 2733 to 2738
- Subject(s) / Keyword(s):
- Nonlinear, control theory, numerical methods, game theory
- Format(s):
- Medium: X
- Location:
- San Diego, CA, USA
- Sponsoring Org:
- National Science Foundation
More Like this
-
By exploiting min-plus linearity, semiconcavity, and semigroup properties of dynamic programming, a fundamental solution semigroup for a class of approximate finite horizon linear infinite dimensional optimal control problems is constructed. Elements of this fundamental solution semigroup are parameterized by the time horizon, and can be used to approximate the solution of the corresponding finite horizon optimal control problem for any terminal cost. They can also be composed to compute approximations on longer horizons. The value function approximation provided takes the form of a min-plus convolution of a kernel with the terminal cost. A general construction for this kernel is provided, along with a spectral representation for a restricted class of sub-problems.
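As a hedged illustration of the representation described above (the symbols W_T for the horizon-T value function, G_T for the kernel, and psi for the terminal cost are introduced here for illustration and do not appear in the abstract), the min-plus convolution and the semigroup composition take the form:

```latex
% Value function as a min-plus convolution of a kernel with the terminal cost
W_T(x) \;=\; \inf_{z}\,\bigl[\, G_T(x,z) + \psi(z) \,\bigr],
% Semigroup property: kernels on shorter horizons compose to a kernel on a longer horizon
G_{T_1+T_2}(x,z) \;=\; \inf_{y}\,\bigl[\, G_{T_1}(x,y) + G_{T_2}(y,z) \,\bigr].
```

Here the infimum plays the role of min-plus summation, so both identities are linear in the min-plus algebra, which is the structure the construction exploits.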
-
This paper studies an optimal stochastic impulse control problem on a finite time horizon with a decision lag, by which we mean that after an impulse is made, a fixed number of time units must elapse before the next impulse is allowed. The continuity of the value function is proved. A suitable version of the dynamic programming principle is established, which takes into account the dependence of the state process on the elapsed time. The corresponding Hamilton-Jacobi-Bellman (HJB) equation is derived, which exhibits some special features of the problem. The value function of this optimal impulse control problem is characterized as the unique viscosity solution of the corresponding HJB equation. An optimal impulse control is constructed provided the value function is given. Moreover, the limiting case with the waiting time approaching 0 is discussed.
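A minimal sketch of how the decision lag can enter the dynamic programming formulation, under the assumption (not spelled out in the abstract) that the state is augmented with the time tau elapsed since the last impulse and impulses are admissible only once tau reaches the lag delta:

```latex
% Impulse allowed (tau >= delta): variational inequality with intervention operator M
\min\bigl\{\, -\partial_t V - \partial_\tau V - \mathcal{L}V - f,\; V - \mathcal{M}V \,\bigr\} = 0,
% Impulse not yet allowed (tau < delta): the equation holds without the obstacle term
-\partial_t V - \partial_\tau V - \mathcal{L}V - f = 0 .
```

Here L is the generator of the uncontrolled state process, f the running cost, and M the impulse intervention operator; the split by tau is a plausible source of the special structure the abstract mentions.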
-
This paper studies adaptive model predictive control (AMPC) of systems with time‐varying and potentially state‐dependent uncertainties. We propose an estimation and prediction architecture within the min‐max MPC framework. An adaptive estimator is presented to estimate the set‐valued measures of the uncertainty using a piecewise‐constant adaptive law, which can be made arbitrarily accurate if the sampling period in adaptation is small enough. Based on these measures, a prediction scheme is provided that predicts the time‐varying feasible set of the uncertainty over the prediction horizon. We show that if the uncertainty and its first derivatives are locally Lipschitz, the stability of the system with AMPC can always be guaranteed under the standard assumptions for traditional min‐max MPC approaches, while the AMPC algorithm enhances control performance by efficiently reducing the size of the feasible set of the uncertainty in the min‐max MPC setting.
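A minimal Python sketch of the two ingredients described above, assuming scalar dynamics; the function names and the Lipschitz-based set growth are hypothetical illustrations, not the paper's algorithm:

```python
def piecewise_constant_estimate(x_meas, x_pred, b_hat, Ts, gain):
    """Piecewise-constant adaptive update of an uncertainty estimate b_hat.

    The estimate is held constant over each sampling period Ts and updated
    from the state-prediction error at the sampling instants; shrinking Ts
    tightens the estimate (mirroring the 'arbitrarily accurate' claim).
    """
    error = x_meas - x_pred           # prediction error accumulated over one period
    return b_hat + gain * error / Ts  # new constant estimate for the next period

def predict_feasible_set(b_hat, lip, Ts, horizon):
    """Conservative interval for the uncertainty over the prediction horizon,
    grown step by step using a local Lipschitz bound on its rate of change."""
    return [(b_hat - lip * Ts * k, b_hat + lip * Ts * k)
            for k in range(1, horizon + 1)]
```

The predicted intervals would then replace a fixed worst-case uncertainty set in the min-max MPC optimization at each step.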
-
Information technology (IT) infrastructure relies on a globalized supply chain that is vulnerable to numerous risks from adversarial attacks. It is important to protect IT infrastructure from these dynamic, persistent risks by delaying adversarial exploits. In this paper, we propose max‐min interdiction models for critical infrastructure protection that prioritize cost‐effective security mitigations to maximally delay adversarial attacks. We consider attacks originating from multiple adversaries, each of which aims to find a "critical path" through the attack surface to complete the corresponding attack as soon as possible. Decision‐makers can deploy mitigations to delay attack exploits; however, mitigation effectiveness is sometimes uncertain. We propose a stochastic model variant to address this uncertainty by incorporating random delay times. The proposed models can be reformulated as a nested max‐max problem using dualization. We propose a Lagrangian heuristic approach that decomposes the max‐max problem into a number of smaller subproblems and updates upper and lower bounds to the original problem via subgradient optimization. We evaluate the perfect information solution value as an alternative method for updating the upper bound. Computational results demonstrate that the Lagrangian heuristic identifies near‐optimal solutions efficiently, outperforming a general‐purpose mixed‐integer programming solver on medium and large instances.
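The bound-updating loop can be illustrated with a generic subgradient template (a sketch only; `solve_subproblems` and `primal_value` are hypothetical callbacks standing in for the decomposed interdiction subproblems and the feasible-solution recovery):

```python
import numpy as np

def lagrangian_subgradient(solve_subproblems, primal_value, lam0, steps=50):
    """Generic subgradient loop for a Lagrangian relaxation of a maximization
    problem: the relaxation gives upper bounds, recovered feasible solutions
    give lower bounds (illustrative template only)."""
    lam = np.array(lam0, dtype=float)
    best_lower, best_upper = -np.inf, np.inf
    for k in range(1, steps + 1):
        dual_bound, subgrad = solve_subproblems(lam)     # decomposed subproblems at current multipliers
        best_upper = min(best_upper, dual_bound)         # tighten the upper bound
        best_lower = max(best_lower, primal_value(lam))  # feasible solution gives a lower bound
        step = 1.0 / k                                   # diminishing step size
        lam = np.maximum(0.0, lam - step * np.asarray(subgrad))  # project onto lam >= 0
    return best_lower, best_upper
```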
-
Motivated by various distributed control applications, we consider a linear system with Gaussian noise observed by multiple sensors which transmit measurements over a dynamic lossy network. We characterize the stationary optimal sensor scheduling policy for the finite horizon, discounted, and long-term average cost problems and show that the value iteration algorithm converges to a solution of the average cost problem. We further show that the suboptimal policies provided by the rolling horizon truncation of the value iteration also guarantee geometric ergodicity and provide near-optimal average cost. Lastly, we provide qualitative characterizations of the multidimensional set of measurement loss rates for which the system is stabilizable for a static network, significantly extending earlier results on intermittent observations.
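As a hedged illustration of the value-iteration step on a finite-state abstraction (in the paper the relevant state is the estimation-error covariance; all names below are hypothetical):

```python
import numpy as np

def average_cost_value_iteration(P, c, n_iter=500):
    """Relative value iteration for a finite-state, average-cost scheduling
    problem (a stand-in for the covariance-valued state in the paper).

    P[a] is the transition matrix under scheduling action a and c[a] the
    per-stage cost vector; the greedy policy returned is stationary."""
    n_states = c[0].shape[0]
    V = np.zeros(n_states)
    for _ in range(n_iter):
        Q = np.stack([c[a] + P[a] @ V for a in range(len(P))])  # one Bellman backup per action
        V = Q.min(axis=0)
        V = V - V[0]             # subtract a reference state so V stays bounded
    return Q.argmin(axis=0)      # stationary sensor-scheduling policy
```

A rolling-horizon truncation of the same backup, run for a fixed number of iterations online, would correspond to the suboptimal policies whose near-optimality the abstract discusses.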