Title: Combining Learning and Model Based Control: Case Study for Single-Input Lotka-Volterra System
A hybrid control architecture for nonlinear dynamical systems is described which combines the advantages of model-based control with those of real-time learning. The idea is to generate input-output data from an error system involving the plant and a proposed model. A discretized Chen-Fliess functional series is then identified from this data and used in conjunction with the model for predictive control. This method builds on the authors' previous work on model-free control of a single-input, single-output Lotka-Volterra system. The problem is revisited here, but now with the introduction of a model for the dynamics. The single-input, multiple-output version of the problem is also investigated as a way to enhance closed-loop performance.
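As a rough, minimal sketch of the data-generation step described in the abstract, the snippet below drives a Lotka-Volterra "plant" and a deliberately mismatched nominal model with the same input and records the output error; this is the kind of input-output record from which a discretized Chen-Fliess series would be identified. The specific dynamics, the channel through which the input enters, and all parameter values are illustrative assumptions, not the authors' setup.

```python
# Illustrative sketch only: the dynamics, input channel, and parameters below
# are assumptions, not the setup used in the paper.
import numpy as np

def lotka_volterra(x, u, a, b, c, d):
    """Controlled Lotka-Volterra dynamics; here u perturbs the prey growth rate."""
    x1, x2 = x
    return np.array([(a + u) * x1 - b * x1 * x2,
                     c * x1 * x2 - d * x2])

def simulate(params, x0, u_seq, dt):
    """Forward-Euler simulation returning the output sequence y = x1 (prey population)."""
    x = np.array(x0, dtype=float)
    ys = []
    for u in u_seq:
        x = x + dt * lotka_volterra(x, u, *params)
        ys.append(x[0])
    return np.array(ys)

dt, T = 0.01, 10.0
steps = int(T / dt)
u_seq = 0.1 * np.sin(2 * np.pi * 0.5 * dt * np.arange(steps))    # excitation input

y_plant = simulate((1.0, 0.5, 0.3, 1.0), [2.0, 1.0], u_seq, dt)  # "true" plant (assumed)
y_model = simulate((1.1, 0.5, 0.3, 0.9), [2.0, 1.0], u_seq, dt)  # mismatched nominal model

# Error-system data: the pairs (u_seq, e_seq) are what a discretized Chen-Fliess
# series would be identified from and then used alongside the model for prediction.
e_seq = y_plant - y_model
```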
Award ID(s): 1839378
PAR ID: 10108074
Author(s) / Creator(s):
Date Published:
Journal Name: Proceedings of the ... American Control Conference
ISSN: 2378-5861
Page Range / eLocation ID: 928-933
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
1. Abstract: This paper is concerned with solving, from the learning-based decomposition control viewpoint, the problem of output tracking with nonperiodic tracking–transition switching. Such a nontraditional tracking problem occurs in applications where sessions for tracking a given desired trajectory are alternated with those for transiting the output with given boundary conditions. It is challenging to achieve precision tracking while maintaining smooth tracking–transition switching, as postswitching oscillations can be induced by the mismatch of the boundary states at the switching instants, and the tracking performance can be limited by the nonminimum-phase (NMP) zeros of the system and affected by factors such as input constraints and external disturbances. Although an approach combining system inversion with optimization techniques has recently been proposed to tackle these challenges, it requires modeling of the system dynamics and complicated online computation, and the resulting controller can be sensitive to model uncertainties. In this work, a learning-based decomposition control technique is developed to overcome these limitations. A dictionary of input–output bases is first constructed offline via data-driven iterative learning. The input–output bases are then used online to decompose the desired output in the tracking sessions and to design an optimal desired transition trajectory with minimal transition time under an input-amplitude constraint. Finally, the control input is synthesized based on the superposition principle and further optimized online to account for system variations and external disturbance. The proposed approach is illustrated through a nanopositioning control experiment on a piezoelectric actuator.
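As a rough illustration of the decomposition-and-superposition step described in item 1, the sketch below decomposes a desired output onto a small dictionary of output bases by least squares and synthesizes the corresponding input by superposing the matching input bases. The basis signals and the desired trajectory are placeholder assumptions; the offline data-driven iterative learning that would actually produce the dictionary, the transition-trajectory design, and the online re-optimization are not shown.

```python
# Illustrative sketch only: the dictionary and desired trajectory are placeholders.
import numpy as np

rng = np.random.default_rng(0)
T, K = 500, 8                       # samples per trajectory, number of basis pairs
t = np.linspace(0.0, 1.0, T)

# Placeholder dictionary: in practice each pair (U[:, k], Y[:, k]) would come
# from data-driven iterative learning on the physical system.
U = rng.standard_normal((T, K))                                        # input bases
Y = np.column_stack([np.sin((k + 1) * np.pi * t) for k in range(K)])   # output bases

y_desired = 0.7 * np.sin(np.pi * t) + 0.2 * np.sin(3 * np.pi * t)

# Decompose the desired output on the output bases (least squares), then
# synthesize the input by superposing the matching input bases.
alpha, *_ = np.linalg.lstsq(Y, y_desired, rcond=None)
u_synth = U @ alpha

print("decomposition residual:", np.linalg.norm(Y @ alpha - y_desired))
```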
2. Function approximation from input and output data pairs constitutes a fundamental problem in supervised learning. Deep neural networks are currently the most popular method for learning to mimic the input-output relationship of a general nonlinear system, as they have proven to be very effective in approximating complex, highly nonlinear functions. In this work, we show that identifying a general nonlinear function y = ƒ(x1,…,xN) from input-output examples can be formulated as a tensor completion problem, and that under certain conditions provably correct nonlinear system identification is possible. Specifically, we model the interactions between the N input variables and the scalar output of a system by a single N-way tensor, and set up a weighted low-rank tensor completion problem with smoothness regularization, which we tackle using a block coordinate descent algorithm. We extend our method to the multi-output setting and the case of partially observed data, which cannot be readily handled by neural networks. Finally, we demonstrate the effectiveness of the approach using several regression tasks, including some standard benchmarks and a challenging student grade prediction task.
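As a toy illustration of the tensor viewpoint in item 2, the sketch below discretizes three scalar inputs onto a grid, treats the sampled function values as a 3-way tensor observed only at a random subset of entries, and fits a low-rank CP factorization by gradient steps on those observed entries. The grid size, rank, step size, and target function are assumptions, and the paper's weighted completion with smoothness regularization solved by block coordinate descent is not reproduced here.

```python
# Toy sketch only: grid, rank, step size, and target function are assumptions.
import numpy as np

rng = np.random.default_rng(1)
n, R = 20, 4                                   # grid points per input, CP rank
grid = np.linspace(-1.0, 1.0, n)

# "True" tensor: samples of y = f(x1, x2, x3) on the grid (assumed target function).
X1, X2, X3 = np.meshgrid(grid, grid, grid, indexing="ij")
T_true = np.sin(X1) * np.cos(X2) + X3 ** 2

# Observe a random 20% of the entries; these play the role of input-output examples.
mask = rng.random(T_true.shape) < 0.2
n_obs = mask.sum()

factors = [0.5 * rng.standard_normal((n, R)) for _ in range(3)]   # CP factor matrices
lr = 1.0
for _ in range(2000):
    T_hat = np.einsum("ir,jr,kr->ijk", *factors)       # reconstruct from CP factors
    E = np.where(mask, T_hat - T_true, 0.0) / n_obs    # error on observed entries only
    # One gradient step per factor per sweep, using the error computed above.
    factors[0] -= lr * np.einsum("ijk,jr,kr->ir", E, factors[1], factors[2])
    factors[1] -= lr * np.einsum("ijk,ir,kr->jr", E, factors[0], factors[2])
    factors[2] -= lr * np.einsum("ijk,ir,jr->kr", E, factors[0], factors[1])

T_hat = np.einsum("ir,jr,kr->ijk", *factors)
rel_err = np.linalg.norm((T_hat - T_true)[~mask]) / np.linalg.norm(T_true[~mask])
print("relative error on unobserved entries:", rel_err)
```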
3. The problem of controller reduction has a rich history in control theory. Yet, many questions remain open. In particular, there exist very few results on the order reduction of general non-observer-based controllers and the subsequent quantification of the closed-loop performance. Recent developments in model-free policy optimization for Linear Quadratic Gaussian (LQG) control have highlighted the importance of this question. In this paper, we first propose a new set of sufficient conditions ensuring that a perturbed controller remains internally stabilizing. Based on this result, we illustrate how to perform order reduction of general (non-observer-based) output feedback controllers using balanced truncation and modal truncation. We also provide explicit bounds on the LQG performance of the reduced-order controller. Furthermore, for single-input, single-output (SISO) systems, we introduce a new controller reduction technique based on truncating unstable modes. We illustrate our theoretical results with numerical simulations. Our results will serve as valuable tools for designing direct policy search algorithms for control problems with partial observations.
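As a brief sketch of the balanced-truncation primitive mentioned in item 3, the snippet below reduces an assumed stable fourth-order state-space controller by solving the two Lyapunov equations for the Gramians, balancing, and truncating. It does not reproduce the paper's internal-stability conditions, modal or unstable-mode truncation, or LQG performance bounds; the example matrices are chosen only to make the script self-contained.

```python
# Illustrative sketch only: the example controller matrices are assumptions.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def balanced_truncation(A, B, C, r):
    """Reduce a stable state-space system (A, B, C) to order r by balanced truncation."""
    # Controllability and observability Gramians: A Wc + Wc A' = -B B', A' Wo + Wo A = -C' C.
    Wc = solve_continuous_lyapunov(A, -B @ B.T)
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)
    # Balancing transformation via Cholesky factors and an SVD.
    Lc = np.linalg.cholesky(Wc)
    Lo = np.linalg.cholesky(Wo)
    U, s, Vt = np.linalg.svd(Lo.T @ Lc)
    Tb = Lc @ Vt.T @ np.diag(s ** -0.5)
    Tinv = np.diag(s ** -0.5) @ U.T @ Lo.T
    Ab, Bb, Cb = Tinv @ A @ Tb, Tinv @ B, C @ Tb
    # Keep the r states with the largest Hankel singular values.
    return Ab[:r, :r], Bb[:r], Cb[:, :r], s

# Example stable fourth-order "controller" (assumed for illustration).
A = np.array([[-1.0,  0.5,  0.0,  0.0],
              [ 0.0, -2.0,  1.0,  0.0],
              [ 0.0,  0.0, -3.0,  0.5],
              [ 0.0,  0.0,  0.0, -4.0]])
B = np.array([[1.0], [0.5], [0.25], [0.1]])
C = np.array([[1.0, 0.2, 0.1, 0.05]])

Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", hsv)
```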
4. With the requirement to enable data analytics and exploration interactively and efficiently, progressive data processing, and progressive join in particular, has become essential to data science. Join queries are particularly challenging due to correlation between the input datasets, which causes the results to be biased toward some join keys. Existing methods carefully control which parts of the input to process in order to improve the quality of progressive results; if the quality is not satisfactory, they process more data to improve the result. In this paper, we propose an alternative approach that initially seems counter-intuitive but works surprisingly well: after query processing, we intentionally report fewer results to the user with the goal of improving quality. The key idea is that if the output deviates from the correct distribution, we temporarily hide some results to correct the bias. As we process more data, the hidden results are inserted back until the full dataset is processed. The main challenge is that we do not know the correct output distribution while the progressive query is running. In this work, we formally define the progressive join problem with quality and progressive result rate constraints. We propose an input&output quality-aware progressive join framework (QPJ) that (1) provides input control that decides which parts of the input to process; (2) estimates the final result distribution progressively; (3) automatically controls the quality of the progressive output rate; and (4) combines input and output control to enable quality control of the progressive results. We compare QPJ with existing methods and show that QPJ can provide progressive output that represents the final answer better than existing methods.
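As a toy illustration of the result-hiding idea in item 4, the sketch below runs a progressive hash join over chunks of a key-sorted (hence biased) probe input and temporarily withholds results for keys that are over-represented relative to an estimated final key distribution, releasing everything once the full input has been processed. The data, the chunking, and the use of a fixed distribution estimate are simplifying assumptions; QPJ itself estimates the final distribution progressively and combines input and output control.

```python
# Toy sketch only: data, chunking, and the fixed distribution estimate are assumptions.
import random
from collections import Counter, defaultdict

random.seed(0)
keys = list("abcde")
R = [(random.choice(keys), i) for i in range(2000)]           # build side
S = sorted((random.choice(keys), i) for i in range(2000))     # probe side; sorted => biased arrival

# Assumed estimate of each key's share of the final join result
# (QPJ estimates this progressively; here it is taken as given).
est_share = {k: 1.0 / len(keys) for k in keys}

build = defaultdict(list)
for k, v in R:
    build[k].append(v)

reported, hidden = [], []
emitted = Counter()
chunk = 200
for start in range(0, len(S), chunk):
    # Produce new join results for this chunk of the probe input.
    produced = [(k, v, w) for k, v in S[start:start + chunk] for w in build[k]]
    hidden.extend(produced)
    # Report results only up to each key's estimated share of what has been produced.
    total = len(reported) + len(hidden)
    budget = {k: int(est_share[k] * total) for k in keys}
    still_hidden = []
    for row in hidden:
        if emitted[row[0]] < budget[row[0]]:
            reported.append(row)
            emitted[row[0]] += 1
        else:
            still_hidden.append(row)          # hide for now; release in later rounds
    hidden = still_hidden

# Once the full input is processed, release everything that is still hidden.
reported.extend(hidden)
print("total results:", len(reported))
```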