Title: Linear Optimal Regulation to Zero Dynamics
A general problem encompassing output regulation and pattern generation can be formulated as the design of controllers that achieve convergence to a persistent trajectory within the zero dynamics, on which an output vanishes. We develop an optimal control theory for such designs by adding the requirement to minimize the H2 norm of a closed-loop transfer function. Within the framework of eigenstructure assignment, the optimal control is proven identical in form to the standard H2 control. However, the solution to the Riccati equation for the linear quadratic regulator is not stabilizing; instead, it partially stabilizes the closed-loop dynamics excluding the zero dynamics. The optimal control architecture is shown to comprise feedback of the deviation from the subspace of the zero dynamics and feedforward of the control input that keeps the state in that subspace.
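For contrast with the paper's partially stabilizing solution, a minimal sketch of the standard, fully stabilizing LQR obtained from the continuous-time algebraic Riccati equation is shown below. The double-integrator plant and the weights are illustrative placeholders, not taken from the paper.

```python
# Conventional LQR via the continuous-time algebraic Riccati equation.
# The paper's optimal control shares this form, but its Riccati solution
# is only partially stabilizing (the zero dynamics are excluded); this
# sketch shows the standard fully stabilizing case for comparison.
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative double-integrator plant (not from the paper).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)   # state cost
R = np.eye(1)   # input cost

P = solve_continuous_are(A, B, Q, R)   # Riccati solution
K = np.linalg.solve(R, B.T @ P)        # optimal gain, u = -K x

# For a stabilizing solution, A - B K is Hurwitz.
eigs = np.linalg.eigvals(A - B @ K)
print(np.all(eigs.real < 0))
```

In the paper's setting, the analogous Riccati solution would leave the eigenvalues associated with the zero dynamics unmoved rather than placing all of them in the open left half-plane.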
Award ID(s):
2113528
PAR ID:
10552123
Author(s) / Creator(s):
Publisher / Repository:
IEEE
Date Published:
Journal Name:
IEEE Transactions on Automatic Control
Volume:
69
Issue:
10
ISSN:
0018-9286
Page Range / eLocation ID:
7088 to 7095
Subject(s) / Keyword(s):
Eigenstructure assignment; linear systems; output regulation; optimal control; pattern formation
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Lawrence, N (Ed.)
    This paper addresses the end-to-end sample complexity bound for learning, in closed loop, the state-estimator-based robust H2 controller for an unknown (possibly unstable) Linear Time Invariant (LTI) system, given a fixed state-feedback gain. We build on the results of Ding et al. (1994) to bridge the gap between the parameterization of all state estimators and the celebrated Youla parameterization. Rewriting the expression of the relevant closed loop allows the optimal linear observer problem, given a fixed state-feedback gain, to be recast as a convex problem in the Youla parameter. The robust synthesis procedure considers bounded additive model uncertainty on the coprime factors of the plant, so that a min-max optimization problem is formulated for the robust H2 controller via an observer approach. The closed-loop identification scheme follows Zhang et al. (2021), where the nominal model of the true plant is identified by constructing a Hankel-like matrix from a single time series of noisy, finite-length input-output data using the ordinary least squares algorithm from Sarkar et al. (2020). Finally, an H∞ bound on the estimated model error is provided, as the robust synthesis procedure requires bounded additive uncertainty on the coprime factors of the model. Zhang et al. (2022b) is the extended version of this paper.
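The least-squares identification step referenced above can be illustrated in a much-simplified form: fitting a one-lag ARX model to a single noisy input-output trajectory by ordinary least squares. The scalar system, noise level, and lag structure are all assumptions for illustration, not the Hankel-matrix scheme of the cited works.

```python
# Simplified sketch of closed-loop-style identification: ordinary least
# squares on a lagged ("Hankel-like", here one-lag) regressor built from
# a single noisy input-output time series. The true parameters a, b and
# the noise level are illustrative.
import numpy as np

rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
T = 500
u = rng.standard_normal(T)           # excitation input
y = np.zeros(T)
for t in range(1, T):                # simulate y[t] = a y[t-1] + b u[t-1] + noise
    y[t] = a_true * y[t-1] + b_true * u[t-1] + 0.01 * rng.standard_normal()

# One-lag regressor matrix and OLS fit of (a, b).
Phi = np.column_stack([y[:-1], u[:-1]])
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(a_hat, b_hat)
```

With persistent excitation and modest noise, the estimates land close to the true parameters; the cited papers additionally bound the model error in the H∞ sense for use in robust synthesis.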
  2.
    A closed-loop control algorithm for the reduction of turbulent flow separation over a NACA 0015 airfoil equipped with leading-edge synthetic jet actuators (SJAs) is presented. A system identification approach based on the Nonlinear Auto-Regressive Moving Average with eXogenous inputs (NARMAX) technique was used to predict the nonlinear dynamics of the fluid flow and to design the controller. Numerical simulations based on the URANS equations are performed at a Reynolds number of 10^6 for various airfoil incidences, with and without closed-loop control. The NARMAX model for flow over the airfoil is based on static pressure data, and the synthetic jet actuator is modeled using an incompressible flow model. The NARMAX identification model developed for the pressure data is nonlinear; therefore, the describing function technique is used to linearize the system within its frequency range. Low-pass filtering is used to obtain quasi-linear state values, which assist in the application of linear control techniques. The reference signal signifies the condition of a fully re-attached flow and is determined from the linearization of the original signal during open-loop control. The controller design follows the standard proportional-integral (PI) technique for a single-input single-output system. The resulting closed-loop response tracks the reference value and leads to significant improvements in the transient response over the open-loop system. The NARMAX controller enhances the lift coefficient from 0.787 in the uncontrolled case to 1.315 in the controlled case, an increase of 67.1%.
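The single-input single-output PI loop described above can be sketched on a toy plant: a first-order lag standing in for the quasi-linearized pressure dynamics, with the reference representing the fully re-attached flow condition. The plant, gains, and time step are assumptions for illustration, not the paper's identified NARMAX model.

```python
# Discrete-time PI tracking loop on a toy first-order plant x' = -x + u,
# a stand-in for the quasi-linearized pressure dynamics. Gains Kp, Ki,
# the time step, and the reference value are illustrative.
dt = 0.01
Kp, Ki = 2.0, 5.0      # illustrative PI gains
ref = 1.0              # reference: fully re-attached flow condition
x = 0.0                # plant state (quasi-linear pressure signal)
integ = 0.0            # integral of the tracking error

for _ in range(5000):              # 50 s of simulated time
    err = ref - x
    integ += err * dt              # integral action removes steady-state error
    u = Kp * err + Ki * integ      # PI control law
    x += dt * (-x + u)             # forward-Euler step of the plant

print(x)
```

The integral term drives the steady-state tracking error to zero, which is the reason a plain proportional controller would not suffice for exact reference tracking here.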
  3. This paper is concerned with two-person mean-field linear-quadratic non-zero-sum stochastic differential games on an infinite horizon. Both open-loop and closed-loop Nash equilibria are introduced. The existence of an open-loop Nash equilibrium is characterized by the solvability of a system of mean-field forward-backward stochastic differential equations on an infinite horizon and the convexity of the cost functionals, and the closed-loop representation of an open-loop Nash equilibrium is given through the solution of a system of two coupled non-symmetric algebraic Riccati equations. The existence of a closed-loop Nash equilibrium is characterized by the solvability of a system of two coupled symmetric algebraic Riccati equations. Two-person mean-field linear-quadratic zero-sum stochastic differential games on an infinite horizon are also considered. The existence of both open-loop and closed-loop saddle points is characterized by the solvability of a system of two coupled generalized algebraic Riccati equations with static stabilizing solutions. Mean-field linear-quadratic stochastic optimal control problems on an infinite horizon are discussed as well, for which it is proved that open-loop solvability and closed-loop solvability are equivalent.
  4. This paper addresses the problem of learning the optimal control policy for a nonlinear stochastic dynamical system. This problem is subject to the 'curse of dimensionality' associated with the dynamic programming method. This paper proposes a novel decoupled data-based control (D2C) algorithm that addresses this problem using a decoupled, 'open-loop - closed-loop' approach. First, an open-loop deterministic trajectory optimization problem is solved using a black-box simulation model of the dynamical system. Then, closed-loop control is developed around this open-loop trajectory by linearization of the dynamics about this nominal trajectory. By virtue of the linearization, a linear-quadratic-regulator-based algorithm can be used for this closed-loop control. We show that the performance of the D2C algorithm is approximately optimal. Moreover, simulation performance suggests a significant reduction in training time compared to other state-of-the-art algorithms.
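The closed-loop half of the decoupled approach above can be sketched as a finite-horizon discrete LQR computed by a backward Riccati recursion on a linearization about the nominal trajectory. The matrices (A, B), the weights, and the horizon are illustrative placeholders, not outputs of the D2C pipeline.

```python
# Finite-horizon discrete LQR around a nominal trajectory: a backward
# Riccati recursion produces time-varying gains for the deviation
# dynamics. (A, B) play the role of the linearization about the
# open-loop optimized trajectory; all numbers are illustrative.
import numpy as np

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])     # linearized dynamics (dt = 0.1)
B = np.array([[0.005],
              [0.1]])
Q = np.eye(2)                  # deviation state cost
R = 0.1 * np.eye(1)            # deviation input cost
N = 50                         # horizon length

P = Q.copy()                   # terminal cost
gains = []
for _ in range(N):             # backward Riccati recursion
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)
gains.reverse()                # gains[t] applies at time step t

# Roll the deviation dynamics out from an initial perturbation.
x = np.array([1.0, 0.0])
for K in gains:
    x = (A - B @ K) @ x
print(np.linalg.norm(x))
```

The deviation from the nominal trajectory is driven toward zero over the horizon, which is exactly the role the LQR plays once the open-loop trajectory optimization has fixed the nominal.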
  5. External and internal convertible (EIC) form-based motion control is one of the effective designs for simultaneous trajectory tracking and balancing of underactuated balance robots. Under certain conditions, the EIC-based control design is shown to lead to uncontrolled robot motion. To overcome this issue, we present a Gaussian process (GP)-based data-driven learning control for underactuated balance robots with the EIC modeling structure. Two GP-based learning controllers are presented that use the EIC property. The partial EIC (PEIC)-based control design partitions the robotic dynamics into a fully actuated subsystem and a reduced-order underactuated subsystem. The null-space EIC (NEIC)-based control compensates for the uncontrolled motion in a subspace, while the other closed-loop dynamics are not affected. Under the PEIC- and NEIC-based controls, the tracking and balance tasks are guaranteed, and a convergence rate and bounded errors are achieved without the uncontrolled motion induced by the original EIC-based control. We validate the results and demonstrate the GP-based learning control design using two inverted pendulum platforms.
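The data-driven ingredient of the learning control above, GP regression, can be sketched in a few lines: an RBF-kernel GP learns a scalar function standing in for an unmodeled dynamics term. The kernel length scale, noise level, and the sine-wave target are assumptions for illustration only.

```python
# Gaussian process regression with an RBF kernel: the posterior mean
# interpolates noisy samples of an unknown function (here sin(x) as a
# stand-in for an unmodeled dynamics term). Hyperparameters are
# illustrative.
import numpy as np

def rbf(X1, X2, ell=0.5):
    """Squared-exponential kernel between two 1-D input sets."""
    d = X1[:, None] - X2[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

rng = np.random.default_rng(1)
X = np.linspace(-2.0, 2.0, 30)                    # training inputs
y = np.sin(X) + 0.01 * rng.standard_normal(30)    # noisy observations

sigma_n = 0.01                                    # observation noise std
K = rbf(X, X) + sigma_n**2 * np.eye(30)           # regularized Gram matrix
alpha = np.linalg.solve(K, y)                     # GP "weights"

Xs = np.array([0.3, 1.2])                         # test inputs
mean = rbf(Xs, X) @ alpha                         # posterior mean prediction
print(mean)
```

In the control setting, such a GP model of the residual dynamics is what the PEIC- and NEIC-based designs feed into the EIC structure to cancel the otherwise uncontrolled motion.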