


This content will become publicly available on October 1, 2025

Title: Linear Optimal Regulation to Zero Dynamics
A general problem encompassing output regulation and pattern generation can be formulated as the design of controllers that achieve convergence to a persistent trajectory within the zero dynamics, on which an output vanishes. We develop an optimal control theory for such designs by adding the requirement that the H2 norm of a closed-loop transfer function be minimized. Within the framework of eigenstructure assignment, the optimal control is proven to be identical in form to the standard H2 control. However, the solution to the Riccati equation for the linear quadratic regulator is not stabilizing; instead, it partially stabilizes the closed-loop dynamics excluding the zero dynamics. The optimal control architecture is shown to comprise feedback of the deviation from the zero-dynamics subspace and feedforward of the control input needed to remain in that subspace.
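For contrast with the partially stabilizing solution discussed above, the standard stabilizing case can be sketched concretely: classical LQR synthesis solves the continuous algebraic Riccati equation for its stabilizing solution and builds the feedback gain from it. Below is a minimal illustration with SciPy on a double-integrator plant; the matrices and weights are assumptions for the demo, not taken from the paper.

```python
# Minimal LQR sketch via the continuous algebraic Riccati equation (CARE).
# The plant, Q, and R are illustrative placeholders, not from the paper.
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator: x_dot = A x + B u
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)           # state weight
R = np.array([[1.0]])   # control weight

P = solve_continuous_are(A, B, Q, R)   # stabilizing Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal gain, u = -K x
# For this Q, R the closed-form gain is K = [1, sqrt(3)]

# The stabilizing solution makes A - B K Hurwitz
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))  # True
```

The paper's point is that for regulation *to* the zero dynamics, the relevant Riccati solution stabilizes only the complement of the zero-dynamics subspace rather than the whole closed loop, so the check above would fail on the zero-dynamics modes by design.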
Award ID(s):
2113528
PAR ID:
10552123
Author(s) / Creator(s):
;
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Transactions on Automatic Control
Volume:
69
Issue:
10
ISSN:
0018-9286
Page Range / eLocation ID:
7088 to 7095
Subject(s) / Keyword(s):
Eigenstructure assignment linear systems output regulation optimal control pattern formation
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. This paper is concerned with two-person mean-field linear-quadratic nonzero-sum stochastic differential games over an infinite horizon. Both open-loop and closed-loop Nash equilibria are introduced. The existence of an open-loop Nash equilibrium is characterized by the solvability of a system of mean-field forward-backward stochastic differential equations over an infinite horizon together with the convexity of the cost functionals, and a closed-loop representation of an open-loop Nash equilibrium is given through the solution to a system of two coupled non-symmetric algebraic Riccati equations. The existence of a closed-loop Nash equilibrium is characterized by the solvability of a system of two coupled symmetric algebraic Riccati equations. Two-person mean-field linear-quadratic zero-sum stochastic differential games over an infinite horizon are also considered. The existence of both open-loop and closed-loop saddle points is characterized by the solvability of a system of two coupled generalized algebraic Riccati equations with static stabilizing solutions. Mean-field linear-quadratic stochastic optimal control problems over an infinite horizon are discussed as well, for which it is proved that open-loop solvability and closed-loop solvability are equivalent.
  2. A closed-loop control algorithm for the reduction of turbulent flow separation over a NACA 0015 airfoil equipped with leading-edge synthetic jet actuators (SJAs) is presented. A system identification approach based on the Nonlinear Auto-Regressive Moving Average with eXogenous inputs (NARMAX) technique was used to predict the nonlinear dynamics of the fluid flow and to design the controller. Numerical simulations based on URANS equations are performed at a Reynolds number of 10^6 for various airfoil incidences with and without closed-loop control. The NARMAX model for flow over the airfoil is based on static pressure data, and the synthetic jet actuator model is developed using an incompressible flow model. The resulting NARMAX identification model for the pressure data is nonlinear; therefore, the describing-function technique is used to linearize the system within its frequency range. Low-pass filtering is used to obtain quasi-linear state values, which assist in the application of linear control techniques. The reference signal signifies the condition of a fully re-attached flow, and it is determined based on the linearization of the original signal during open-loop control. The controller design follows the standard proportional-integral (PI) technique for the single-input single-output system. The resulting closed-loop response tracks the reference value and leads to significant improvements in the transient response over the open-loop system. The NARMAX controller enhances the lift coefficient from 0.787 for the uncontrolled case to 1.315 for the controlled case, an increase of 67.1%.
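The PI tracking step described above can be sketched on a toy plant. The first-order model, time step, and gains below are illustrative assumptions, not the URANS/NARMAX setup of the paper.

```python
# Discrete-time PI loop on an assumed first-order plant y_dot = -y + u.
# Gains kp, ki and the reference are illustrative, not from the paper.
def simulate_pi(kp=2.0, ki=1.0, ref=1.0, dt=0.01, steps=2000):
    y, integ = 0.0, 0.0
    for _ in range(steps):
        e = ref - y               # tracking error
        integ += e * dt           # integral of the error
        u = kp * e + ki * integ   # PI control law
        y += dt * (-y + u)        # forward-Euler plant update
    return y

# Integral action drives the steady-state error to zero
print(abs(simulate_pi() - 1.0) < 0.01)  # True
```

The integral term is what removes the steady-state offset; a pure proportional loop on this plant would settle at ref * kp / (1 + kp) instead.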
  3. Lawrence, N (Ed.)
    This paper addresses the end-to-end sample complexity bound for learning, in closed loop, the state-estimator-based robust H2 controller for an unknown (possibly unstable) Linear Time Invariant (LTI) system, given a fixed state-feedback gain. We build on the results of Ding et al. (1994) to bridge the gap between the parameterization of all state estimators and the celebrated Youla parameterization. Rewriting the expression of the relevant closed loop allows the optimal linear-observer problem, given a fixed state-feedback gain, to be recast as a convex problem in the Youla parameter. The robust synthesis procedure considers bounded additive model uncertainty on the coprime factors of the plant, so that a min-max optimization problem is formulated for the robust H2 controller via an observer approach. The closed-loop identification scheme follows Zhang et al. (2021), where the nominal model of the true plant is identified by constructing a Hankel-like matrix from a single time series of noisy, finite-length input-output data using the ordinary least squares algorithm of Sarkar et al. (2020). Finally, an H∞ bound on the estimated model error is provided, as the robust synthesis procedure requires bounded additive uncertainty on the coprime factors of the model. Reference Zhang et al. (2022b) is the extended version of this paper.
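A minimal stand-in for the least-squares identification step above: fitting a scalar ARX model to a single input-output time series by ordinary least squares. The model order, true parameters, and noiseless data are assumptions for illustration; the paper's Hankel-matrix construction and error bounds are not reproduced.

```python
# Ordinary-least-squares fit of a scalar ARX model y[k] = a*y[k-1] + b*u[k-1]
# from one input-output trajectory. Parameters a_true, b_true are assumed
# for the demo; data are noiseless so recovery is exact.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
N = 500
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Regressor matrix: each row is [y[k-1], u[k-1]]
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(np.round(theta, 3))  # recovers [a_true, b_true]
```

With noisy data, as in the paper's setting, the estimate carries an error that shrinks with the trajectory length, which is exactly what the cited finite-sample bounds quantify.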
  4. This paper addresses the problem of learning the optimal control policy for a nonlinear stochastic dynamical system. This problem is subject to the 'curse of dimensionality' associated with the dynamic programming method. This paper proposes a novel decoupled data-based control (D2C) algorithm that addresses this problem using a decoupled 'open-loop, closed-loop' approach. First, an open-loop deterministic trajectory optimization problem is solved using a black-box simulation model of the dynamical system. Then, closed-loop control is developed around this open-loop trajectory by linearizing the dynamics about the nominal trajectory. By virtue of the linearization, a linear quadratic regulator based algorithm can be used for this closed-loop control. We show that the performance of the D2C algorithm is approximately optimal. Moreover, simulation results suggest a significant reduction in training time compared with other state-of-the-art algorithms.
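The decoupled recipe above — treat the simulator as a black box, linearize about a nominal point, then wrap an LQR around the linearization — can be sketched as follows. The pendulum step map, weights, and nominal point (the stable downward equilibrium, taken as a one-point "trajectory") are assumptions for the demo, not the paper's benchmarks.

```python
# D2C-style sketch: finite-difference linearization of a black-box step
# map, followed by discrete LQR via Riccati iteration. Model and weights
# are illustrative assumptions.
import numpy as np

dt = 0.05
def step(x, u):  # black-box simulator: damped pendulum, Euler step
    th, om = x
    return np.array([th + dt * om,
                     om + dt * (-np.sin(th) - 0.1 * om + u[0])])

def linearize(f, x0, u0, eps=1e-6):  # finite-difference Jacobians A, B
    n, m = len(x0), len(u0)
    A, B = np.zeros((n, n)), np.zeros((n, m))
    f0 = f(x0, u0)
    for i in range(n):
        dx = np.zeros(n); dx[i] = eps
        A[:, i] = (f(x0 + dx, u0) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x0, u0 + du) - f0) / eps
    return A, B

x0, u0 = np.zeros(2), np.zeros(1)  # nominal: downward equilibrium
A, B = linearize(step, x0, u0)

# Discrete LQR: iterate the Riccati difference equation to a fixed point
Q, R = np.eye(2), np.eye(1)
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

# Closed loop about the nominal point is locally stable
print(np.all(np.abs(np.linalg.eigvals(A - B @ K)) < 1.0))  # True
```

Along a full nominal trajectory, the same linearization would be repeated at each time step and the Riccati recursion run backward in time, yielding a time-varying gain schedule.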
  5. In feedback control of dynamical systems, a higher loop gain is typically desirable to achieve faster closed-loop dynamics, a smaller tracking error, and more effective disturbance suppression. Yet an increased loop gain requires a higher control effort, which can exceed the actuation capacity of the feedback system and intermittently cause actuator saturation. To benefit from the advantages of a high feedback gain while avoiding actuator saturation, this paper advocates a dynamic gain adaptation technique in which the loop gain is lowered whenever necessary to prevent actuator saturation, and is raised again whenever possible. This concept is optimized for linear systems based on an optimal control formulation inspired by the linear quadratic regulator (LQR). The quadratic cost functional adopted in LQR is modified into a certain quasi-quadratic form in which the control cost is dynamically emphasized or deemphasized as a function of the system state. The optimal control law resulting from this quasi-quadratic cost functional is essentially nonlinear, but its structure resembles an LQR with an adaptable gain adjusted by the system state to prevent actuator saturation. Moreover, under mild assumptions analogous to those of LQR, this optimal control law is stabilizing. As an illustrative example, the application of this optimal control law to feedback design for dc servomotors is examined, and its performance is verified by numerical simulations.
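A toy version of the gain-lowering idea above: whenever the nominal high-gain command would exceed the actuator limit, the effective gain is scaled down so the input just stays within bounds, and the full gain applies again once the state is small. The scalar plant, gain, and limit are assumptions for illustration; the paper's quasi-quadratic optimal formulation is not reproduced.

```python
# Scalar sketch of dynamic gain adaptation (assumed toy model):
# scale the nominal gain down whenever the command would saturate.
def adapted_control(x, k_nominal=5.0, u_max=1.0):
    u = -k_nominal * x
    if abs(u) > u_max:
        # lower the effective gain so |u| sits exactly at the limit
        u *= u_max / abs(u)
    return u

# First-order plant x_dot = u, forward-Euler simulation from x(0) = 4
x, dt = 4.0, 0.01
for _ in range(2000):
    x += dt * adapted_control(x)
print(abs(x) < 1e-3)  # True: the state converges despite the input limit
```

Far from the origin the loop runs at the actuation limit (low effective gain); once |x| drops below u_max / k_nominal, the full nominal gain takes over, which mirrors the lowered-then-restored gain behavior the paper formalizes.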