

Title: Functional estimation of perturbed positive real infinite dimensional systems using adaptive compensators
This paper extends earlier results on the adaptive estimation of nonlinear terms in finite dimensional systems using a reproducing kernel Hilbert space to a class of positive real infinite dimensional systems. The simplest class of strictly positive real infinite dimensional systems has collocated input and output operators, with the state operator generating an exponentially stable C₀ semigroup on the state space X. The nonlinear term is parametrized in a reproducing kernel Hilbert space Q, which, together with the adaptive observer, results in an evolution system on X × Q. Using Lyapunov-redesign methods, the adaptive laws for the parameter estimates are derived and the well-posedness of the resulting evolution error system is summarized. The adaptive estimate of the unknown nonlinearity is subsequently used to compensate for it. A special case of finite dimensional systems with an embedded reproducing kernel Hilbert space to handle the nonlinear term is also considered and the corresponding convergence results are summarized. A numerical example on a one-dimensional diffusion equation is presented (see the illustrative sketch below the record metadata).
Award ID(s):
1825546
NSF-PAR ID:
10195590
Journal Name:
2020 American Control Conference (ACC)
Page Range / eLocation ID:
1582 to 1587
Sponsoring Org:
National Science Foundation
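
Relating to the abstract above, the following is a minimal numerical sketch of the construction for the one-dimensional diffusion example: a collocated input/output diffusion equation perturbed by an unknown nonlinearity of the output, an adaptive observer with output injection, a Gaussian-kernel RKHS parametrization of the functional estimate, and a Lyapunov-redesign-style gradient adaptive law. All gains, the kernel, the actuator/sensor shape b(x), and the true nonlinearity are illustrative assumptions, not values or formulas taken from the paper.

```python
# Hedged sketch (not the paper's code): adaptive functional estimation for a
# 1-D diffusion equation with collocated input/output, an RKHS (Gaussian
# kernel) parametrization of the unknown output nonlinearity, and a
# Lyapunov-redesign-style adaptive law.
import numpy as np

# spatial discretization of  u_t = d * u_xx + b(x) * ( w(t) + f(y) ),  y = <b, u>
N, d = 50, 0.1                        # interior grid points, diffusivity
dx = 1.0 / (N + 1)
x = np.linspace(dx, 1.0 - dx, N)
dt = 0.4 * dx**2 / d                  # explicit Euler stability margin
steps = int(40.0 / dt)

b = ((x > 0.4) & (x < 0.6)).astype(float)
b /= np.sum(b) * dx                   # normalized collocated actuator/sensor shape

def laplacian(u):
    """Second difference with homogeneous Dirichlet boundary conditions."""
    return (np.concatenate(([0.0], u[:-1])) - 2.0 * u
            + np.concatenate((u[1:], [0.0]))) / dx**2

f_true = lambda y: -np.tanh(2.0 * y)  # unknown output nonlinearity (illustrative)

# RKHS parametrization of the functional estimate f_hat
centers = np.linspace(-2.0, 2.0, 41)  # kernel centers over the expected output range
sigma = 0.15
kernel = lambda y: np.exp(-(centers - y)**2 / (2.0 * sigma**2))
alpha = np.zeros_like(centers)        # RKHS coefficients of f_hat
f_hat = lambda y: alpha @ kernel(y)

gamma, kappa = 5.0, 10.0              # adaptation gain, output-injection gain
u = np.zeros(N)                       # plant state
u_hat = np.zeros(N)                   # observer state

for step in range(steps):
    t = step * dt
    w = np.sin(t) + 0.5 * np.sin(3.0 * t)        # known exogenous input
    y = dx * np.sum(b * u)                       # collocated measurement
    y_hat = dx * np.sum(b * u_hat)

    # plant and adaptive observer (explicit Euler in time)
    u = u + dt * (d * laplacian(u) + b * (w + f_true(y)))
    u_hat = u_hat + dt * (d * laplacian(u_hat) + b * (w + f_hat(y))
                          + kappa * b * (y - y_hat))

    # adaptive law: move f_hat along the kernel section at the current output,
    # scaled by the output estimation error (Lyapunov-redesign style)
    alpha += dt * gamma * kernel(y) * (y - y_hat)

# compare the learned nonlinearity with the true one on a sample of outputs;
# agreement is only expected on the range of y actually visited
ys = np.linspace(-1.0, 1.0, 9)
print(np.c_[ys, f_true(ys), [f_hat(v) for v in ys]])
```

As in the paper's convergence discussion, the estimate can only be trusted where the output has been persistently excited; outside the visited output range the learned function is unconstrained.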
More Like this
  1. This paper proposes a new approach to the adaptive functional estimation of second-order infinite dimensional systems with structured perturbations. First, the proposed observer is formulated in the natural second-order setting, ensuring that the time derivative of the estimated position is the estimated velocity; it is therefore called a natural adaptive observer. Assuming that the system does not yield a positive real system when placed in first-order form, the next step in deriving the parameter adaptive laws is to assume a form of input-output collocation. Finally, to estimate structured perturbations taking the form of functions of the position and/or velocity outputs, the parameter space is not identified with a finite dimensional Euclidean space but is instead taken to be a Reproducing Kernel Hilbert Space. Such a setting avoids a priori assumptions on the dimension of the parameter space. Convergence of the position and velocity errors in their respective norms is established via a parameter-dependent Lyapunov function, formulated specifically for second-order infinite dimensional systems and including appropriately defined norms of the functional errors in the reproducing kernel Hilbert spaces. Boundedness of the functional estimates follows immediately, and under an appropriately defined persistence of excitation condition for functional estimation, convergence of the functional estimates follows. When the system is governed by vector second-order dynamics, all abstract spaces for the state evolution collapse to Euclidean spaces and the natural adaptive observer results simplify. Numerical results for a second-order PDE and a multi-degree-of-freedom finite dimensional mechanical system are presented.
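One plausible reading of the second-order structure described in item 1, written in assumed generic notation (M, D, K mass/damping/stiffness operators, B the input operator, C the output operator, f the structured perturbation with RKHS estimate f̂, L an output-injection gain, γ an adaptation gain); this is a sketch of the overall form, not the paper's exact formulation:

```latex
% Plant in second-order abstract form, with a structured perturbation f of the
% position/velocity outputs (assumed generic notation):
\[
  M\ddot{x} + D\dot{x} + Kx = Bu + B\,f(y_p, y_v),
  \qquad y_p = Cx, \quad y_v = C\dot{x}.
\]
% Natural adaptive observer: by construction the estimated velocity is the
% time derivative of the estimated position.
\[
  M\ddot{\hat{x}} + D\dot{\hat{x}} + K\hat{x}
    = Bu + B\,\hat{f}(y_p, y_v) + L\,(y_v - \hat{y}_v),
  \qquad \hat{y}_p = C\hat{x}, \quad \hat{y}_v = C\dot{\hat{x}}.
\]
% Functional adaptive law in the RKHS: a gradient-type update along the kernel
% section at the measured outputs, driven by the velocity output error.
\[
  \dot{\hat{f}} = \gamma\, k\bigl((y_p, y_v), \cdot\bigr)\,(y_v - \hat{y}_v).
\]
```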
  2. This paper presents an adaptive functional estimation scheme for the fault detection and diagnosis of nonlinear faults in positive real infinite dimensional systems. The system is assumed to satisfy a positive realness condition, and the fault, which takes the form of a nonlinear function of the output, is assumed to enter the system at an unknown time. The proposed detection and diagnostic observer uses a Reproducing Kernel Hilbert Space as the parameter space; via a Lyapunov redesign approach, the learning scheme for the unknown functional is used to detect the fault occurrence, diagnose the fault and finally accommodate it via an adaptive control reconfiguration. Results on parabolic PDEs with either boundary or in-domain actuation and sensing are included.
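A schematic of the fault model and detection logic described in item 2, again in assumed generic notation (T_0 the unknown fault time, β the fault time profile, ε a detection threshold); this is a sketch of the generic structure, not the paper's precise scheme:

```latex
% Positive real plant with a nonlinear fault f(y) entering at the unknown time
% T_0 through a fault time profile \beta (assumed generic notation):
\[
  \dot{x} = Ax + Bu + B\,\beta(t - T_0)\,f\bigl(y(t)\bigr), \qquad y = Cx.
\]
% Detection: a fault is declared the first time the output residual of the
% adaptive observer exceeds a threshold \varepsilon:
\[
  T_d = \inf\{\, t \ge 0 : \|y(t) - \hat{y}(t)\| > \varepsilon \,\}.
\]
% After detection the RKHS estimate \hat{f} is adapted (diagnosis) and used in
% a reconfigured control law, e.g. u = u_{\mathrm{nom}} - \hat{f}(y) (accommodation).
```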
  3. Models of nonlinear quantum computation based on deterministic positive trace-preserving (PTP) channels and evolution equations are investigated. The models are defined in any finite Hilbert space, but the main results are for dimension . For every normalizable linear or nonlinear positive map ϕ on bounded linear operators X, there is an associated normalized PTP channel . Normalized PTP channels include unitary mean field theories, such as the Gross–Pitaevskii equation for interacting bosons, as well as models of linear and nonlinear dissipation. They classify into four types, yielding three distinct forms of nonlinearity whose computational power is explored. In the qubit case, these channels support Bloch ball torsion and other distortions studied previously, where it has been shown that such nonlinearity can be used to increase the separation between a pair of close qubit states, suggesting an exponential speedup for state discrimination. Building on this idea, the authors argue that this operation can be made robust to noise by using dissipation to induce a bifurcation to a novel phase where a pair of attracting fixed points creates an intrinsically fault-tolerant nonlinear state discriminator.
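A small numerical illustration of the normalized-channel idea in item 3, for a single qubit: a positive but non-trace-preserving map ϕ(ρ) = AρA† followed by trace normalization acts nonlinearly on density matrices, and for states near its repelling fixed point the trace distance between two close states initially grows, which no linear channel can achieve. The specific operator and states are illustrative choices, not taken from the paper.

```python
# Hedged illustration (not the paper's code): a normalized positive map on a
# qubit. phi(rho) = A rho A^dagger is positive but not trace preserving;
# dividing by the trace gives a valid state again, but the induced map on
# density matrices is nonlinear.
import numpy as np

A = np.diag([0.4, 1.0])                 # illustrative non-unitary operator

def normalized_channel(rho):
    """Apply phi(rho) = A rho A^dagger, then renormalize the trace to one."""
    out = A @ rho @ A.conj().T
    return out / np.trace(out).real

def pure(theta):
    """Pure qubit state cos(theta/2)|0> + sin(theta/2)|1> as a density matrix."""
    v = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
    return np.outer(v, v)

def trace_distance(r1, r2):
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(r1 - r2)))

# two close states near the repelling fixed point |0><0| of the map: under the
# normalized (nonlinear) channel their trace distance grows over these few
# iterations, which is impossible for any linear quantum channel
rho_a, rho_b = pure(0.10), pure(0.14)
for k in range(4):
    print(k, round(float(trace_distance(rho_a, rho_b)), 4))
    rho_a = normalized_channel(rho_a)
    rho_b = normalized_channel(rho_b)
```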
  4. Principal Component Analysis (PCA) and Kernel Principal Component Analysis (KPCA) are fundamental methods in machine learning for dimensionality reduction. The former finds a low-dimensional linear approximation in finite dimensions, while the latter typically operates in an infinite dimensional Reproducing Kernel Hilbert Space (RKHS). In this paper, we present a geometric framework for computing the principal linear subspaces in both situations, as well as in the robust PCA case, that amounts to computing the intrinsic average on the space of all subspaces: the Grassmann manifold. Points on this manifold are defined as the subspaces spanned by K-tuples of observations. The intrinsic Grassmann average of these subspaces is shown to coincide with the principal components of the observations when they are drawn from a Gaussian distribution. We show similar results in the RKHS case and provide an efficient algorithm for computing the projection onto this average subspace. The result is a method akin to KPCA which is substantially faster. Further, we present a novel online version of KPCA using our geometric framework. Competitive performance of all our algorithms is demonstrated on a variety of real and synthetic data sets.
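A hedged sketch of the subspace-averaging idea in item 4, in the finite dimensional (PCA) case. For simplicity it uses the extrinsic (chordal) mean of the projectors onto subspaces spanned by random K-tuples of observations, rather than the intrinsic Grassmann average computed in the paper, and compares the resulting subspace with the ordinary PCA subspace via principal angles.

```python
# Hedged sketch: average of subspaces spanned by K-tuples of observations,
# compared with PCA. This uses the extrinsic (chordal) mean of the orthogonal
# projectors, not the paper's intrinsic Grassmann average.
import numpy as np

rng = np.random.default_rng(0)
n, d, K, n_tuples = 500, 10, 3, 200

# zero-mean Gaussian data with a few dominant directions
scales = np.array([5.0, 3.0, 2.0] + [0.3] * (d - 3))
X = rng.normal(size=(n, d)) * scales

def projector(vectors):
    """Orthogonal projector onto the span of the given row vectors."""
    Q, _ = np.linalg.qr(vectors.T)      # columns form an orthonormal basis
    return Q @ Q.T

# average the projectors of subspaces spanned by random K-tuples of observations
P_bar = np.zeros((d, d))
for _ in range(n_tuples):
    idx = rng.choice(n, size=K, replace=False)
    P_bar += projector(X[idx])
P_bar /= n_tuples

# the top-K eigenvectors of the averaged projector give the "average subspace"
eigvals, eigvecs = np.linalg.eigh(P_bar)
avg_subspace = eigvecs[:, -K:]

# ordinary PCA directions for comparison
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
pca_subspace = Vt[:K].T

# principal angles (degrees) between the two K-dimensional subspaces: small
# angles indicate the averaged subspace essentially recovers the PCA subspace
sv = np.linalg.svd(avg_subspace.T @ pca_subspace, compute_uv=False)
print(np.degrees(np.arccos(np.clip(sv, -1.0, 1.0))))
```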
  5. The paper introduces a new kernel-based Maximum Mean Discrepancy (MMD) statistic for measuring the distance between two distributions given finitely many multivariate samples. When the distributions are locally low-dimensional, the proposed test can be made more powerful to distinguish certain alternatives by incorporating local covariance matrices and constructing an anisotropic kernel. The kernel matrix is asymmetric; it computes the affinity between $n$ data points and a set of $n_R$ reference points, where $n_R$ can be drastically smaller than $n$. While the proposed statistic can be viewed as a special class of Reproducing Kernel Hilbert Space MMD, the consistency of the test is proved under mild assumptions on the kernel, as long as $\|p-q\| \sqrt{n} \to \infty$, and a finite-sample lower bound of the testing power is obtained. Applications to flow cytometry and diffusion MRI datasets are demonstrated, which motivate the proposed approach to compare distributions.
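A hedged sketch of the reference-point construction in item 5, with the anisotropic local-covariance kernel simplified to an isotropic Gaussian: each sample is mapped to its affinities to a small set of n_R reference points (an asymmetric n × n_R kernel matrix), the statistic is the distance between the two mean feature vectors, and a permutation loop gives a rough calibration. None of the parameter choices come from the paper.

```python
# Hedged sketch: kernel MMD computed through an asymmetric affinity between the
# samples and a small set of n_R reference points. The anisotropic,
# local-covariance kernel of the paper is simplified to an isotropic Gaussian.
import numpy as np

rng = np.random.default_rng(1)

def reference_mmd(X, Y, R, bandwidth):
    """Squared MMD-like statistic between samples X and Y via reference points.

    Each sample is mapped to phi(x) = (k(x, r_1), ..., k(x, r_{n_R})); the
    statistic is the squared distance between the mean feature vectors.
    """
    def features(Z):
        sq = ((Z[:, None, :] - R[None, :, :]) ** 2).sum(-1)   # n x n_R distances
        return np.exp(-sq / (2.0 * bandwidth ** 2))
    return float(np.sum((features(X).mean(0) - features(Y).mean(0)) ** 2))

# two distributions that differ by a shift in one coordinate
d, n = 5, 1000
X = rng.normal(size=(n, d))
Y = rng.normal(size=(n, d))
Y[:, 0] += 0.3

# n_R reference points, here subsampled from the pooled data; n_R << n
R = np.vstack([X, Y])[rng.choice(2 * n, size=100, replace=False)]

stat = reference_mmd(X, Y, R, bandwidth=1.0)
print("statistic:", stat)

# rough permutation calibration of the test (approximate null distribution)
pooled = np.vstack([X, Y])
null = []
for _ in range(200):
    perm = rng.permutation(2 * n)
    null.append(reference_mmd(pooled[perm[:n]], pooled[perm[n:]], R, 1.0))
print("approx. p-value:", float(np.mean(np.array(null) >= stat)))
```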