Title: P1 finite element methods for a weighted elliptic state-constrained optimal control problem
Abstract: We investigate a P1 finite element method for a two-dimensional weighted optimal control problem arising from a three-dimensional axisymmetric elliptic state-constrained optimal control problem with Dirichlet boundary conditions.
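As background for the weighted formulation, the two-dimensional weight typically enters through the cylindrical volume element when the axisymmetric three-dimensional problem is reduced to its meridian domain. A sketch with the prototypical Poisson state equation (the meridian domain D, coordinates (r, z), and boundary piece \Gamma_D are our notation, not necessarily the paper's): for an axisymmetric function y = y(r, z) on the solid of revolution \Omega generated by D,

\int_{\Omega} |\nabla y|^{2}\, dx \;=\; 2\pi \int_{D} \bigl( (\partial_r y)^{2} + (\partial_z y)^{2} \bigr)\, r \, dr\, dz,

so the three-dimensional state equation -\Delta y = u with y = 0 on \partial\Omega becomes the two-dimensional weighted problem

-\,\partial_r ( r\, \partial_r y ) - \partial_z ( r\, \partial_z y ) = r\, u \ \text{ in } D, \qquad y = 0 \ \text{ on } \Gamma_D,

posed in r-weighted Sobolev spaces, with no boundary condition imposed on the symmetry axis r = 0.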
Award ID(s):
1913050, 1913229
PAR ID:
10163850
Author(s) / Creator(s):
; ;
Date Published:
Journal Name:
Numerical Algorithms
ISSN:
1017-1398
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper, we study a natural optimal control problem associated to the Paneitz obstacle problem on closed 4-dimensional Riemannian manifolds. We show the existence of an optimal control which is an optimal state and also induces a conformal metric with prescribed Q-curvature. We also show C∞-regularity of optimal controls and some compactness results for the optimal controls. In the case of the 4-dimensional standard sphere, we characterize all optimal controls.
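For orientation on the prescribed Q-curvature statement in item 1: on a closed 4-manifold (M, g), a conformal metric g_u = e^{2u} g has Q-curvature governed by the fourth-order Paneitz operator P_g through the transformation law (sign and normalization conventions vary across references; this is a generic sketch, not necessarily the paper's formulation)

P_g u + 2 Q_g = 2\, Q_{g_u} e^{4u},

so producing a conformal metric with prescribed Q-curvature amounts to solving this fourth-order elliptic equation for u.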
  2. This article investigates a stochastic optimal control problem with linear Gaussian dynamics, a quadratic performance measure, but non-Gaussian observations. The linear Gaussian dynamics characterize a large number of interacting agents evolving under a centralized control and external disturbances. The aggregate state of the agents is only partially known to the centralized controller, by means of samples taken randomly in time from anonymous, randomly selected agents. Because agent identities are removed from the samples, the observation set has a non-Gaussian structure, and as a consequence the optimal control law that minimizes a quadratic cost is essentially nonlinear and infinite-dimensional for any finite number of agents. For infinitely many agents, however, this paper shows that the optimal control law is the solution to a reduced-order, finite-dimensional linear quadratic Gaussian problem with Gaussian observations sampled only in time. For this problem, the separation principle holds and is used to develop an explicit optimal control law by combining a linear quadratic regulator with a separately designed finite-dimensional minimum mean square error state estimator. Conditions are presented under which this simple optimal control law can be adopted as a suboptimal control law for finitely many agents.
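The separation-principle controller that item 2 obtains in the infinite-population limit has the standard LQG structure. A minimal discrete-time sketch in Python, with placeholder matrices A, B, C, Q, R, W, V that are not the paper's aggregate model, and with ordinary Gaussian measurements rather than the paper's anonymized sampling scheme:

import numpy as np
from scipy.linalg import solve_discrete_are

def lqg_gains(A, B, C, Q, R, W, V):
    """Steady-state LQR gain K and Kalman gain L (separation principle)."""
    P = solve_discrete_are(A, B, Q, R)                 # control Riccati equation
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # u = -K x_hat
    S = solve_discrete_are(A.T, C.T, W, V)             # filter Riccati equation
    L = S @ C.T @ np.linalg.inv(C @ S @ C.T + V)       # measurement-update gain
    return K, L

def lqg_step(x_hat, y, u_prev, A, B, C, K, L):
    """One certainty-equivalent step: predict, correct with measurement y, feed back."""
    x_pred = A @ x_hat + B @ u_prev
    x_hat = x_pred + L @ (y - C @ x_pred)
    u = -K @ x_hat
    return x_hat, u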
  3. A structure detection method is developed for solving state-variable inequality path-constrained optimal control problems. The method obtains estimates of activation and deactivation times of active state-variable inequality path constraints (SVICs), and subsequently allows for the times to be included as decision variables in the optimization process. Once the identification step is completed, the method partitions the problem into a multiple-domain formulation consisting of constrained and unconstrained domains. Within each domain, Legendre-Gauss-Radau (LGR) orthogonal direct collocation is used to transcribe the infinite-dimensional optimal control problem into a finite-dimensional nonlinear programming (NLP) problem. Within constrained domains, the corresponding time derivative of the active SVICs that are explicit in the control are enforced as equality path constraints, and at the beginning of the constrained domains, the necessary tangency conditions are enforced. The accuracy of the proposed method is demonstrated on a well-known optimal control problem where the analytical solution contains a state-constrained arc.
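The Legendre-Gauss-Radau collocation points used in item 3 can be generated directly from Legendre polynomials: the N LGR points on [-1, 1) are the roots of P_{N-1} + P_N and include the endpoint -1. A small illustrative Python sketch (it does not reproduce the cited method's structure detection, multiple-domain transcription, or tangency conditions):

import numpy as np
from numpy.polynomial import legendre

def lgr_points(N):
    """Return the N Legendre-Gauss-Radau collocation points on [-1, 1)."""
    c = np.zeros(N + 1)          # Legendre-series coefficients of P_{N-1} + P_N
    c[N - 1] = 1.0
    c[N] = 1.0
    return np.sort(legendre.legroots(c))

print(lgr_points(5))             # first entry is -1 up to roundoff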
  4. The Partial Integral Equation (PIE) framework provides a unified algebraic representation for use in analysis, control, and estimation of infinite-dimensional systems. However, the presence of input delays results in a PIE representation with dependence on the derivative of the control input, u̇. This dependence complicates the problem of optimal state-feedback control for systems with input delay – resulting in a bilinear optimization problem. In this paper, we present two strategies for convexification of the H∞-optimal state-feedback control problem for systems with input delay. In the first strategy, we use a generalization of Young's inequality to formulate a convex optimization problem, albeit with some conservatism. In the second strategy, we filter the actuator signal – introducing additional dynamics, but resulting in a convex optimization problem without conservatism. We compare these two optimal control strategies on four example problems, solving the optimization problem using the latest release of the PIETOOLS software package for analysis, control, and simulation of PIEs.
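The convexification in the first strategy of item 4 relies on a Young-type matrix bound; the classical finite-dimensional version reads, for matrices X and Y of equal dimensions and any scalar \gamma > 0 (the paper's generalization to the PIE setting is not reproduced here),

X^{\top} Y + Y^{\top} X \;\preceq\; \gamma\, X^{\top} X + \gamma^{-1}\, Y^{\top} Y,

which bounds a bilinear cross term by terms each involving only one factor, at the price of the conservatism introduced by the free parameter \gamma.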
  5. We present a neural network approach for approximating the value function of high-dimensional stochastic control problems. Our training process simultaneously updates our value function estimate and identifies the part of the state space likely to be visited by optimal trajectories. Our approach leverages insights from optimal control theory and the fundamental relation between semi-linear parabolic partial differential equations and forward-backward stochastic differential equations. To focus the sampling on relevant states during neural network training, we use the stochastic Pontryagin maximum principle (PMP) to obtain the optimal controls for the current value function estimate. By design, our approach coincides with the method of characteristics for the non-viscous Hamilton-Jacobi-Bellman equation arising in deterministic control problems. Our training loss consists of a weighted sum of the objective functional of the control problem and penalty terms that enforce the HJB equations along the sampled trajectories. Importantly, training is unsupervised in that it does not require solutions of the control problem. Our numerical experiments highlight our scheme's ability to identify the relevant parts of the state space and produce meaningful value estimates. Using a two-dimensional model problem, we demonstrate the importance of the stochastic PMP to inform the sampling and compare to a finite element approach. With a nonlinear control-affine quadcopter example, we illustrate that our approach can handle complicated dynamics. For a 100-dimensional benchmark problem, we demonstrate that our approach improves accuracy and time-to-solution and, via a modification, we show the wider applicability of our scheme.
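For reference, the HJB equation that item 5 enforces as a penalty along sampled trajectories has, for a controlled diffusion dx_t = f(x_t, u_t)\,dt + \sigma\, dW_t with running cost \ell and terminal cost g, the generic form (a schematic of the standard equation, not the paper's exact loss)

\partial_t V(x,t) + \min_{u}\Bigl\{ \ell(x,u) + \nabla_x V(x,t)\cdot f(x,u) + \tfrac{1}{2}\operatorname{tr}\bigl(\sigma\sigma^{\top}\nabla_x^2 V(x,t)\bigr) \Bigr\} = 0, \qquad V(x,T) = g(x),

and the training loss described above is, schematically, the control objective plus weighted penalties on the residual of this equation and its terminal condition evaluated along the trajectories generated with the stochastic PMP.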