Search for: All records

Creators/Authors contains: "Sznaier, M."

  1. Koopman operators provide a tractable means of learning linear approximations of nonlinear dynamics. Many approaches have been proposed to find these operators, typically based upon approximations using an a priori fixed class of models. However, choosing appropriate models and bounding the approximation error is far from trivial. Motivated by these difficulties, in this paper we propose an optimization-based approach to learning Koopman operators from data. Our results show that the Koopman operator, the associated Hilbert space of observables, and a suitable dictionary can be obtained by solving two rank-constrained semidefinite programs (SDPs). While in principle these problems are NP-hard, the use of standard relaxations of rank leads to convex SDPs.
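For intuition, here is a minimal sketch of the kind of rank relaxation the abstract above refers to: fitting a linear operator K to snapshot data while replacing the non-convex rank penalty with its standard convex surrogate, the nuclear norm. The random data, lifted dimension, and weight lam are illustrative assumptions, not the paper's SDP formulation.

```python
# Hypothetical sketch: nuclear-norm-relaxed Koopman operator fit
# (not the paper's exact rank-constrained SDP).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 100))   # lifted observables at time t (illustrative data)
Y = rng.standard_normal((5, 100))   # lifted observables at time t+1

K = cp.Variable((5, 5))
lam = 0.1                           # assumed trade-off weight
# rank(K) is non-convex; the nuclear norm ||K||_* is its standard convex relaxation.
objective = cp.Minimize(cp.sum_squares(Y - K @ X) + lam * cp.normNuc(K))
cp.Problem(objective).solve()
print("approximate Koopman operator:\n", K.value)
```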
  2. This paper proposes a data-driven framework to address the worst-case estimation problem for switched discrete-time linear systems based solely on measured input/output data and an ℓ∞ bound on the noise. We start with the problem of designing a worst-case optimal estimator for a single system and show that this problem can be recast as a rank minimization problem and efficiently solved using standard relaxations of rank. We then extend these results to the switched case. Our main result shows that, when the mode variable is known, the problem can be solved by proceeding in a similar manner. To address the case where the mode variable is unmeasurable, we impose the hybrid decoupling constraint (HDC) in order to reformulate the original problem as a polynomial optimization problem, which can in turn be reduced to a tractable convex optimization using moments-based techniques.
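To fix ideas, a toy set-membership computation under an ℓ∞ noise bound: find the range of parameters consistent with the data, whose midpoint (Chebyshev center) is the worst-case optimal estimate. The scalar model and bound eps below are illustrative assumptions, not the switched estimator design from the paper.

```python
# Hypothetical sketch: worst-case (central) estimation under an l-infinity noise bound.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(2)
phi = rng.standard_normal(50)                  # scalar regressors (illustrative)
theta_true = 0.7
eps = 0.1                                      # assumed l-infinity noise bound
y = theta_true * phi + rng.uniform(-eps, eps, 50)

theta = cp.Variable()
consistent = [cp.abs(y - theta * phi) <= eps]  # membership set defined by the data
lo = cp.Problem(cp.Minimize(theta), consistent); lo.solve()
hi = cp.Problem(cp.Maximize(theta), consistent); hi.solve()
# The midpoint of the consistency interval minimizes the worst-case estimation error.
print("worst-case optimal estimate:", 0.5 * (lo.value + hi.value))
```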
  3. This paper addresses the problem of identifying error-in-variables switched linear models from experimental input/output data. This problem is known to be generically NP-hard and thus computationally expensive to solve. To address this difficulty, several relaxations have been proposed in the past few years. While solvable in polynomial time, these (convex) relaxations tend to scale poorly with the number of points and the number/order of the subsystems, effectively limiting their applicability to scenarios with a relatively small number of data points. To overcome this limitation, in this paper we propose an efficient method that only requires performing (number of subsystems) singular value decompositions of matrices whose size is independent of the number of points. The underlying idea is to obtain a sum-of-squares polynomial approximation of the support of each subsystem one at a time, and to use these polynomials to segment the data into sets, each generated by a single subsystem. As shown in the paper, exploiting ideas from Christoffel functions allows these polynomial approximations to be found simply by performing SVDs. The parameters of each subsystem can then be identified from the segmented data using existing error-in-variables (EIV) techniques.
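The Christoffel function idea can be pictured in a few lines: embed the data through monomials, form the empirical moment matrix, and use the inverse Christoffel function as a score; points where it is small are likely on the support of the dominant subsystem. The degree-2 embedding and the 80% threshold below are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical sketch: segmenting data with an empirical Christoffel function.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 2))       # data points (illustrative)

def veronese(X):
    # Degree-2 monomial embedding [1, x1, x2, x1^2, x1*x2, x2^2] (assumed choice).
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([np.ones(len(X)), x1, x2, x1**2, x1 * x2, x2**2], axis=1)

V = veronese(X)
M = V.T @ V / len(X)                    # empirical moment matrix
Minv = np.linalg.pinv(M)                # obtainable via an SVD, as in the paper
score = np.einsum('ij,jk,ik->i', V, Minv, V)   # inverse Christoffel function per point
inliers = score <= np.quantile(score, 0.8)     # assumed threshold: low score = on support
print("points assigned to the dominant subsystem:", inliers.sum())
```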
  4. Kernel dimensionality reduction (KDR) algorithms find a low-dimensional representation of the original data by optimizing kernel dependency measures that are capable of capturing nonlinear relationships. The standard strategy is to first map the data into a high-dimensional feature space using kernels prior to a projection onto a low-dimensional space. While KDR methods can be easily solved by keeping the most dominant eigenvectors of the kernel matrix, the resulting features are no longer easy to interpret. Alternatively, interpretable KDR (IKDR) differs in that it projects onto a subspace before the kernel feature mapping; therefore, the projection matrix can indicate how the original features linearly combine to form the new features. Unfortunately, the IKDR objective requires a non-convex manifold optimization that is difficult to solve and can no longer be solved by eigendecomposition. Recently, an efficient iterative spectral (eigendecomposition) method (ISM) has been proposed for this objective in the context of alternative clustering. However, ISM only provides theoretical guarantees for the Gaussian kernel. This greatly constrains ISM's usage, since any kernel method using ISM is limited to a single kernel. This work extends the theoretical guarantees of ISM to an entire family of kernels, thereby empowering ISM to solve any kernel method with the same objective. In identifying this family, we prove that each kernel within the family has a surrogate Φ matrix and that the optimal projection is formed by its most dominant eigenvectors. With this extension, we establish how a wide range of IKDR applications across different learning paradigms can be solved by ISM. To support reproducible results, the source code is made publicly available at https://github.com/ANONYMIZED
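At its core, ISM is a fixed-point iteration: build a kernel-dependent surrogate matrix Φ from the current projection, then update the projection to Φ's most dominant eigenvectors. The Laplacian-style Φ below is a rough sketch for the Gaussian kernel with stand-in dependency weights Gamma; the exact construction of Φ and the weights is in the paper.

```python
# Hypothetical sketch of the ISM fixed-point loop (Gaussian kernel, assumed Phi form).
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((100, 10))       # data (illustrative)
Gamma = rng.standard_normal((100, 100))  # dependency weights (stand-in for the real ones)
Gamma = (Gamma + Gamma.T) / 2
q, sigma = 2, 1.0                        # target dimension and kernel width (assumed)

W = np.linalg.qr(rng.standard_normal((10, q)))[0]   # random orthonormal start
for _ in range(20):
    Z = X @ W
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    Psi = Gamma * np.exp(-sq / (2 * sigma**2))      # kernel-weighted dependencies
    L = np.diag(Psi.sum(1)) - Psi                   # Laplacian-style combination (assumed)
    Phi = X.T @ L @ X                               # surrogate Phi matrix
    vals, vecs = np.linalg.eigh(Phi)
    W_new = vecs[:, -q:]                            # most dominant eigenvectors
    if np.linalg.norm(W_new @ W_new.T - W @ W.T) < 1e-6:
        W = W_new
        break
    W = W_new
print("learned interpretable projection W with shape", W.shape)
```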
  5. In this paper we propose a data-driven fault detection framework for semi-supervised scenarios where labeled training data from the system under consideration (the “target”) is imbalanced (e.g., only relatively few labels are available for one of the classes), but data from a related system (the “source”) is readily available. An example of this situation is when a generic simulator is available but needs to be tuned on a case-by-case basis to match the parameters of the actual system. The goal of this paper is to work with the statistical distribution of the data without necessitating system identification. Our main result shows that if the source and target domains are related by a linear transformation (a common assumption in domain adaptation), the problem of designing a classifier that minimizes a misclassification loss over the joint source and target domains reduces to a convex optimization subject to a single (non-convex) equality constraint. This second-order equality constraint can be recast as a rank-1 optimization problem, where the rank constraint can be efficiently handled through a reweighted nuclear norm surrogate. These results are illustrated with a practical application: fault detection in additive manufacturing (industrial 3D printing). The proposed method is able to exploit simulation data (the source domain) to substantially outperform classifiers tuned using only data from a single domain.
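The reweighted nuclear norm surrogate mentioned above can be sketched generically: solve a convex problem with a weighted nuclear norm penalty, then update the weights from the previous solution's singular vectors (a Mohan and Fazel-style update) so that small singular values are penalized harder, pushing the solution toward low rank. The toy data-fit objective and parameters below are assumptions, not the fault-detection formulation.

```python
# Hypothetical sketch: driving a matrix toward low rank via reweighted nuclear norm.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(5)
A = rng.standard_normal((8, 8))   # toy data-fit target (illustrative)
n, delta, lam = 8, 1e-3, 1.0

W1 = np.eye(n); W2 = np.eye(n)    # reweighting matrices, initialized to identity
for _ in range(5):
    Xv = cp.Variable((n, n))
    cp.Problem(cp.Minimize(cp.sum_squares(Xv - A)
                           + lam * cp.normNuc(W1 @ Xv @ W2))).solve()
    U, s, Vt = np.linalg.svd(Xv.value)
    # Weights shrink directions with small singular values harder next round.
    W1 = U @ np.diag(1.0 / np.sqrt(s**2 + delta)) @ U.T
    W2 = Vt.T @ np.diag(1.0 / np.sqrt(s**2 + delta)) @ Vt
print("singular values after reweighting:", np.round(np.linalg.svd(Xv.value)[1], 3))
```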
  6. We propose a generalization of the popular nonlinear ARX model structure by treating its parameters as varying over time. The parameters are considered generated by linear filters operating on the model’s regressors. The filters are expressed as a sum of atoms that are either sums of damped exponentials and sinusoids, or sinusoids with time-varying frequencies. This form allows us to enforce stability of the parameter evolution as well as to leverage the atomic norm minimization framework for inducing sparsity. It also facilitates easy incorporation of smoothness-related priors, making it possible to treat these models as nonlinear extensions of the familiar LPV models.
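One way to picture the atomic decomposition: build a finite dictionary of damped sinusoid atoms over time, let each model parameter be a sparse combination of them, and fit with an ℓ1 penalty (the atomic norm reduces to ℓ1 for a fixed finite dictionary). The scalar single-regressor example and the (damping, frequency) grid below are illustrative assumptions, not the paper's model class.

```python
# Hypothetical sketch: time-varying ARX parameter as a sparse sum of damped sinusoids.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(6)
T = 200
t = np.arange(T)
theta_true = 0.8 * np.exp(-0.01 * t) * np.cos(0.1 * t)   # slowly varying parameter
u = rng.standard_normal(T)                                # regressor (illustrative)
y = theta_true * u + 0.01 * rng.standard_normal(T)

# Dictionary of damped-sinusoid atoms on an assumed (damping, frequency) grid.
atoms = np.stack([np.exp(-d * t) * np.cos(w * t)
                  for d in (0.0, 0.01, 0.05)
                  for w in (0.05, 0.1, 0.2)], axis=1)      # T x K

c = cp.Variable(atoms.shape[1])
theta = atoms @ c                                          # parameter trajectory
residual = y - cp.multiply(u, theta)
# l1 on atom coefficients plays the role of the atomic norm; it induces sparsity.
cp.Problem(cp.Minimize(cp.sum_squares(residual) + 0.1 * cp.norm1(c))).solve()
print("active atoms:", np.sum(np.abs(c.value) > 1e-3))
```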