A two-time-scale system involves both fast and slow dynamics. This article studies observer design for general nonlinear two-time-scale systems and presents two alternative nonlinear observer design approaches, one full-order and one reduced-order. The full-order observer is designed through a systematic scheme for selecting the design parameters, so that the fast and slow observer dynamics are assigned to estimate the corresponding system modes. The reduced-order observer is derived from a lower-dimensional model to reconstruct the slow states, combined with the algebraic slow-motion invariant manifold function to reconstruct the fast states. An error analysis shows that the reduced-order observer provides accurate state estimates for the detailed system, with an exponentially decaying estimation error. In the last part of the article, the two proposed observers are designed for an anaerobic digestion process, as an illustrative example to evaluate their performance and convergence properties.
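The two-time-scale structure and the reduced-order observer idea can be sketched on a toy problem. The following is a minimal sketch with an invented linear example, not the paper's system or design: a singularly perturbed plant with one slow state and one fast state, an observer built from the reduced slow model with output injection, and the fast state recovered algebraically through the slow-motion invariant manifold function.

```python
# Minimal sketch, assuming an invented linear example (not the paper's
# system): a singularly perturbed plant with slow state x and fast state z,
#     dx/dt = -x + z,        eps * dz/dt = -2*z + x,
# whose slow-motion invariant manifold is z = h(x) = x/2, so the reduced
# slow model is dx/dt = -x/2.  A reduced-order observer estimates x from
# the measurement y = x using the reduced model plus output injection, and
# reconstructs the fast state algebraically through h(.).

def simulate(eps=0.01, dt=1e-4, t_end=3.0, gain=2.0):
    x, z = 1.0, 1.0            # plant states (z starts off the manifold)
    x_hat = 0.0                # observer state (deliberately wrong guess)
    for _ in range(int(t_end / dt)):
        y = x                  # measured slow output
        dx = -x + z                        # slow dynamics
        dz = (-2.0 * z + x) / eps          # fast dynamics
        dx_hat = -0.5 * x_hat + gain * (y - x_hat)   # reduced-order observer
        x, z = x + dt * dx, z + dt * dz    # forward Euler (dt << eps)
        x_hat += dt * dx_hat
    z_hat = 0.5 * x_hat        # fast state recovered via the manifold h
    return x, z, x_hat, z_hat

x, z, x_hat, z_hat = simulate()
```

After the fast transient of duration O(eps), the plant settles onto the manifold z ≈ x/2, and the observer's estimation error decays exponentially, mirroring the error analysis described in the abstract.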
- Award ID(s): 2108856
- PAR ID: 10329145
- Date Published:
- Journal Name: Proceedings of the National Academy of Sciences of the United States of America
- Volume: 118
- Issue: 48
- ISSN: 0027-8424
- Page Range / eLocation ID: e2113650118
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Abstract:
Geophysical turbulence has a wide range of spatiotemporal scales that requires a multiscale prediction model for efficient and fast simulations. Stochastic parameterization is a class of multiscale methods that approximates the large-scale behaviors of the turbulent system without relying on scale separation. In the stochastic parameterization of unresolved subgrid-scale dynamics, there are several modeling parameters to be determined by tuning or fitting to data. We propose a strategy to estimate the modeling parameters in the stochastic parameterization of geostrophic turbulent systems. The main idea of the proposed approach is to generate data in a spatiotemporally local domain and use physical/statistical information to estimate the modeling parameters. In particular, we focus on the estimation of modeling parameters in the stochastic superparameterization, a variant of the stochastic parameterization framework, for an idealized model of synoptic-scale turbulence in the atmosphere and oceans. The test regimes considered in this study include strong and moderate turbulence with complicated patterns of waves, jets, and vortices.
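As a generic illustration of fitting a small number of stochastic modeling parameters to data (this is not the superparameterization used in the study, and the parameter names are invented for the sketch): the simplest such scheme replaces an unresolved subgrid tendency with a red-noise AR(1) process whose memory `phi` and stationary spread `sigma` are the parameters to estimate.

```python
import random

# Generic sketch only (not the study's stochastic superparameterization):
# an AR(1) red-noise model for an unresolved subgrid forcing,
#     e[n+1] = phi * e[n] + sigma * sqrt(1 - phi**2) * w[n],   w ~ N(0, 1),
# which has zero mean, stationary standard deviation sigma, and lag-1
# autocorrelation phi -- the two modeling parameters fit to data.

def ar1_forcing(n_steps, phi=0.9, sigma=2.0, seed=0):
    rng = random.Random(seed)
    amp = sigma * (1.0 - phi ** 2) ** 0.5   # noise amplitude for stationarity
    e, out = 0.0, []
    for _ in range(n_steps):
        e = phi * e + amp * rng.gauss(0.0, 1.0)
        out.append(e)
    return out

series = ar1_forcing(200_000)
```

Given a time series of diagnosed subgrid forcings, `phi` and `sigma` would be recovered from the sample lag-1 autocorrelation and standard deviation, which is the spirit of the local data-driven estimation strategy described above.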
-
A general, variational approach to derive low-order reduced models from possibly non-autonomous systems is presented. The approach is based on the concept of the optimal parameterizing manifold (OPM), which substitutes for more classical notions of invariant or slow manifolds when the breakdown of "slaving" occurs, i.e., when the unresolved variables can no longer be expressed as an exact functional of the resolved ones. The OPM provides, within a given class of parameterizations of the unresolved variables, the manifold that optimally averages out these variables as conditioned on the resolved ones. The class of parameterizations retained here is that of continuous deformations of parameterizations rigorously valid near the onset of instability. These deformations are produced through the integration of auxiliary backward–forward systems built from the model's equations and lead to analytic formulas for the parameterizations. In this modus operandi, the backward integration time is the key parameter to select, per scale/variable to parameterize, in order to derive the relevant parameterizations, which are bound to lose exactness away from the instability onset due to the breakdown of slaving typically encountered, e.g., in chaotic regimes. The selection is made through data-informed minimization of a least-squares parameterization defect. It is thus shown that, by optimizing the backward integration time per scale/variable to parameterize, skillful OPM reduced systems can be derived that accurately predict higher-order critical transitions or catastrophic tipping phenomena, even though the parameterization formulas are trained only on regimes prior to these transitions.
-
Short-term motor adaptation to novel movement dynamics has been shown to involve at least two concurrent learning processes: a slow process that responds weakly to error but retains information well, and a fast process that responds strongly to error but has poor retention. This modeling framework can explain several properties of motion-dependent motor adaptation (e.g., 24-hour retention). An important assumption of this computational framework is that learning is based only on the experienced movement error, and the effect of noise (either internally generated or externally applied) is not considered. We examined the respective error sensitivity by quantifying adaptation in three subject groups distinguished by the noise added to the motion-dependent perturbation (magnitudes of 0, 3, or 7 N at a frequency of 10 Hz; 20 subjects/group). We assessed the feedforward adaptive changes in motor output and examined the adaptation rate, retention, and decay of learning. Applying a two-state modeling framework showed that the applied noise during training mainly affected the fast learning process, with the slow process largely unaffected; participants in the higher noise groups demonstrated a reduced force profile following training, but the decay rate across groups was similar, suggesting that the slow process was unimpaired across conditions. Collectively, our results provide evidence that noise significantly decreases motor adaptation, but this reduction may be due to its influence over specific learning mechanisms. Importantly, this may have implications for how the motor system compensates for random fluctuations, especially when affected by brain disorders that result in movement tremor (e.g., Essential Tremor).
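The two-state framework itself is compact enough to sketch. The following is a minimal simulation in which the retention factors `A` and error sensitivities `B` are illustrative textbook-style values, not parameters fit in this study:

```python
# Hedged sketch of the standard two-state model of motor adaptation
# (parameter values are illustrative, not fit to this study's data):
#     x_slow(n+1) = A_s * x_slow(n) + B_s * e(n)
#     x_fast(n+1) = A_f * x_fast(n) + B_f * e(n)
# with net adaptation x = x_slow + x_fast and error e = perturbation - x.
# The slow process learns weakly (small B_s) but retains well (A_s near 1);
# the fast process learns strongly but forgets quickly.

def two_state(n_trials=200, perturbation=1.0,
              A_s=0.99, B_s=0.02, A_f=0.80, B_f=0.20):
    x_s = x_f = 0.0
    traj = []
    for _ in range(n_trials):
        e = perturbation - (x_s + x_f)   # experienced movement error
        x_s = A_s * x_s + B_s * e        # slow: weak learning, good retention
        x_f = A_f * x_f + B_f * e        # fast: strong learning, poor retention
        traj.append((x_s, x_f))
    return traj

traj = two_state()
x_s_final, x_f_final = traj[-1]
```

Early in training the fast process carries most of the adaptation, while at asymptote the slow process dominates, which is why selectively impairing the fast process (as the applied noise did here) reduces initial learning more than retention.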
Significance statement: Short-term motor adaptation to novel movement dynamics has been shown to involve at least two concurrent learning processes: a slow process that responds weakly to error but retains information well, and a fast process that responds strongly to error but has poor retention. This computational framework assumes that learning is based only on the movement error, and the effect of noise is not considered. We found that as the magnitude of externally generated noise increased, the overall learning rate decreased. We found that this overall decrease in adaptation could be explained specifically by impairments to the fast learning process. The applied motor noise had little effect on the retention and decay of adaptation, aspects that mainly involve the slow learning process.
-
Abstract: Stochastic parameterizations account for uncertainty in the representation of unresolved subgrid processes by sampling from the distribution of possible subgrid forcings. Some existing stochastic parameterizations utilize data-driven approaches to characterize uncertainty, but these approaches require significant structural assumptions that can limit their scalability. Machine learning models, including neural networks, are able to represent a wide range of distributions and build optimized mappings between a large number of inputs and subgrid forcings. Recent research on machine learning parameterizations has focused only on deterministic parameterizations. In this study, we develop a stochastic parameterization using the generative adversarial network (GAN) machine learning framework. The GAN stochastic parameterization is trained and evaluated on output from the Lorenz '96 model, which is a common baseline model for evaluating both parameterization and data assimilation techniques. We evaluate different ways of characterizing the input noise for the model and perform model runs with the GAN parameterization at weather and climate time scales. Some of the GAN configurations perform better than a baseline bespoke parameterization at both time scales, and the networks closely reproduce the spatiotemporal correlations and regimes of the Lorenz '96 system. We also find that, in general, those models that produce skillful forecasts are also associated with the best climate simulations.
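For reference, the single-scale Lorenz '96 model used as the testbed is simple to write down. Below is a minimal sketch; the 40-variable configuration and forcing F = 8 are the conventional choices, not necessarily the exact setup of the study, and the helper names are invented:

```python
# Hedged sketch (the study's GAN setup is not reproduced here): the
# single-scale Lorenz '96 model, a standard baseline for parameterization
# and data-assimilation experiments,
#     dX_i/dt = (X_{i+1} - X_{i-2}) * X_{i-1} - X_i + F,
# with cyclic indices; F = 8 gives chaotic dynamics.

def l96_rhs(x, forcing=8.0):
    n = len(x)
    return [(x[(i + 1) % n] - x[i - 2]) * x[i - 1] - x[i] + forcing
            for i in range(n)]

def rk4_step(x, dt, forcing=8.0):
    # classical fourth-order Runge-Kutta step
    k1 = l96_rhs(x, forcing)
    k2 = l96_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k1)], forcing)
    k3 = l96_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k2)], forcing)
    k4 = l96_rhs([xi + 0.5 * dt * ki for xi, ki in zip(x, k3)], forcing)
    return [xi + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for xi, a, b, c, d in zip(x, k1, k2, k3, k4)]

# spin up from a slightly perturbed rest state into the chaotic attractor
x = [8.0] * 40
x[0] += 0.01
for _ in range(2000):          # dt = 0.01, about 20 model time units
    x = rk4_step(x, 0.01)
```

The quadratic advection term conserves energy while the linear damping and constant forcing balance it on average, which is what makes the model a convenient surrogate for geophysical turbulence in parameterization studies like this one.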