Title: Working Memory Augmentation for Improved Learning in Neural Adaptive Control
In this paper, we propose a novel control architecture, inspired by neuroscience, for the adaptive control of continuous-time systems. The objective is to design control architectures and algorithms that can learn and adapt quickly to changes, even abrupt ones. In the setting of standard neural network (NN) based adaptive control, the proposed architecture augments the NN with an external working memory. The learning system stores in its external working memory recently observed feature vectors from the hidden layer of the NN that are relevant, and forgets older irrelevant values. It retrieves relevant vectors from the working memory to modify the final control signal generated by the controller. The external working memory improves context, inducing the learning system to search in a particular direction. This directed learning allows the learning system to quickly find a good approximation of the unknown function even after abrupt changes. We consider two classes of controllers to illustrate our ideas: (i) a model reference NN adaptive controller for linear systems with matched uncertainty, and (ii) a backstepping NN controller for strict feedback systems. Through extensive simulations and specific metrics, we show that memory augmentation improves learning significantly even when the system undergoes sudden changes. Importantly, we also provide evidence for the mechanism by which this memory augmentation improves learning.
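As a rough illustration of the idea, the following Python sketch shows how a controller might store hidden-layer feature vectors in an external memory and blend a relevance-weighted recall into its adaptive term. All names, the circular write rule, the similarity-based read, and the mixing gain `beta` are our assumptions for illustration, not the paper's exact design.

```python
import numpy as np

class WorkingMemory:
    """Illustrative external working memory holding recent hidden-layer
    feature vectors; the oldest entry is overwritten (forgotten) first."""

    def __init__(self, n_slots, dim):
        self.slots = np.zeros((n_slots, dim))
        self.next = 0  # circular write pointer

    def write(self, phi):
        # store the latest feature vector, forgetting the oldest one
        self.slots[self.next] = phi
        self.next = (self.next + 1) % len(self.slots)

    def read(self, phi):
        # attention-style read: weight each slot by its similarity
        # (relevance) to the current feature vector phi
        scores = self.slots @ phi
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ self.slots  # relevance-weighted recall


def control_term(W_hat, phi, memory, beta=0.5):
    """Nominal NN adaptive term W_hat^T phi, modified by a memory
    recall term; beta is an assumed mixing gain."""
    memory.write(phi)
    recall = memory.read(phi)
    return W_hat.T @ (phi + beta * recall)
```

The intuition matches the abstract: the recalled vectors bias the search toward directions that were recently useful, which is what speeds up re-learning after an abrupt change.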
Award ID(s):
1839429
NSF-PAR ID:
10190799
Journal Name:
2019 IEEE Conference on Decision and Control
Page Range / eLocation ID:
6785 to 6792
Sponsoring Org:
National Science Foundation
More Like this
  1. We introduced a working memory augmented adaptive controller in our recent work. The controller uses attention to read from and write to the working memory. Attention allows the controller to read specific information that is relevant and to update its working memory based on relevance, similar to how humans pick out relevant information from the enormous amount received through the various senses. The retrieved information is used to modify the final control input computed by the controller. We showed that this modification speeds up learning. In that work, we used a soft-attention mechanism for the adaptive controller. Controllers that use soft attention update and read information from all memory locations at all times, to an extent determined by relevance. But, for the same reason, the information stored in the memory can be lost. In contrast, hard attention updates and reads from only one location at any point in time, which allows the memory to retain the information stored in the other locations. The downside is that the controller can fail to shift attention when the information in the current location becomes less relevant. We propose an attention mechanism that comprises (i) a hard attention mechanism and, additionally, (ii) an attention reallocation mechanism. The attention reallocation enables the controller to reallocate attention to a different location when the relevance of the location it is reading from diminishes. The reallocation also ensures that the information stored in the memory before the shift in attention is retained, which can be lost under both soft and hard attention mechanisms. Through detailed simulations of various scenarios for two-link robot arm systems, we illustrate the effectiveness of the proposed attention mechanism.
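A minimal sketch of the contrast, assuming a cosine-similarity relevance measure and a fixed threshold (both illustrative choices, not the paper's exact mechanism): hard attention touches a single slot, and reallocation moves that pointer when the attended slot's relevance drops, leaving the other slots intact.

```python
import numpy as np

class HardAttentionMemory:
    """Illustrative hard-attention memory with reallocation; the slot
    update rule and relevance measure here are assumptions."""

    def __init__(self, n_slots, dim, threshold=0.1):
        self.slots = np.zeros((n_slots, dim))
        self.active = 0          # the single attended location
        self.threshold = threshold

    def relevance(self, phi):
        # cosine similarity of each slot to the current feature vector
        norms = np.linalg.norm(self.slots, axis=1) * np.linalg.norm(phi) + 1e-9
        return (self.slots @ phi) / norms

    def step(self, phi, lr=0.2):
        rel = self.relevance(phi)
        # reallocation: if the attended slot's relevance has diminished,
        # shift attention to the most relevant slot; the old slot is left
        # untouched, so its contents are retained
        if rel[self.active] < self.threshold:
            self.active = int(np.argmax(rel))
        # hard attention: update and read only the active location
        self.slots[self.active] += lr * (phi - self.slots[self.active])
        return self.slots[self.active]
```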
  2. Abstract

    In this work, we propose a novel adaptive formation control architecture for a group of quadrotor systems under line-of-sight (LOS) distance, relative distance, and attitude constraints, where the constraint requirements can be both asymmetric and time-varying in nature. The LOS distance constraint ensures that each quadrotor does not deviate too far from its desired flight trajectory. The LOS relative inter-quadrotor distance constraint guarantees that the LOS distance between any two quadrotors in the formation is neither too large (which may result in the loss of communication between quadrotors, for example) nor too small (which may result in collision between quadrotors, for example). The attitude constraints ensure that the roll, pitch, and yaw angles of each quadrotor do not deviate too much from the desired profile. Universal barrier functions are adopted in the controller design and analysis; this generic framework can address systems with different types of constraints within a unified controller architecture. Furthermore, each quadrotor's mass and inertia are unknown, and the system dynamics are subject to time-varying external disturbances. Through rigorous analysis, an exponential convergence rate is guaranteed on the distance and attitude tracking errors, while all constraints are satisfied during operation. A simulation example further demonstrates the efficacy of the proposed control framework.
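For intuition, a common log-type barrier for a scalar error confined to an asymmetric, possibly time-varying interval looks as follows. This is only one member of the family that the universal barrier function framework unifies, and the bound names `k_lo`, `k_hi` are our illustrative choices.

```python
import numpy as np

def barrier(e, k_lo, k_hi):
    """Illustrative asymmetric barrier for an error e constrained to the
    interval (-k_lo, k_hi): zero at e = 0 and unbounded as e approaches
    either boundary, so a controller shaped by it keeps e inside."""
    assert -k_lo < e < k_hi, "constraint already violated"
    return np.log((k_lo * k_hi) / ((e + k_lo) * (k_hi - e)))

def barrier_grad(e, k_lo, k_hi):
    # dV/de; used to steer the error away from the bounds. With
    # time-varying constraints, k_lo(t) and k_hi(t) are re-evaluated
    # at each instant.
    return 1.0 / (k_hi - e) - 1.0 / (e + k_lo)
```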

     
  3. The monitoring of data streams with a network structure has drawn increasing attention due to its wide applications in modern process control. In these applications, high-dimensional sensor nodes are interconnected through an underlying network topology. In such a case, an abnormality occurring at any node may propagate dynamically across the network and cause changes at other nodes over time. Furthermore, the high dimensionality of such data significantly increases the cost of resources for data transmission and computation, so that only partial observations can be transmitted or processed in practice. Overall, how to quickly detect abnormalities in such large networks under resource constraints remains a challenge, especially given the sampling uncertainty under dynamic anomaly occurrences and network-based patterns. In this paper, we incorporate network structure information into the monitoring and adaptive sampling methodologies for quick anomaly detection in large networks where only partial observations are available. We develop a general monitoring and adaptive sampling method and further extend it to the case with memory constraints; both exploit network distance and centrality information for better process monitoring and identification of abnormalities. Theoretical investigations of the proposed methods demonstrate their sampling efficiency in balancing exploration and exploitation, as well as their detection performance guarantees. Numerical simulations and a case study on a power network demonstrate the superiority of the proposed methods in detecting various types of shifts. Note to Practitioners: Continuous monitoring of networks for anomalous events is critical for a large number of applications involving power networks, computer networks, epidemiological surveillance, social networks, etc. This paper addresses the challenges of monitoring large networks when monitoring resources are limited, so that only a subset of nodes in the network is observable. Specifically, we integrate the network structure information of the nodes to construct sequential detection methods via effective data augmentation, and to design adaptive sampling algorithms that observe suspicious nodes that are likely to be abnormal. The method is then further generalized to the case where the memory available for computation is also constrained by the network size. The developed method is beneficial and effective for various anomaly patterns, especially when the initial anomaly occurs at random nodes in the network. The proposed methods are shown to quickly detect changes in the network and to dynamically adjust the sampling priority based on online observations, as demonstrated in the theoretical investigation, simulations, and case studies.
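The following sketch conveys the flavor of one such adaptive sampling step; the scoring rule is our own simplification, not the paper's algorithm. Nodes are ranked by their local anomaly statistics plus a bonus for centrality and proximity to currently suspicious nodes, assuming a connected `networkx` graph.

```python
import numpy as np
import networkx as nx

def choose_samples(G, stats, k, alpha=0.5):
    """Pick k nodes to observe next. `stats` maps each node to a local
    anomaly statistic (e.g., a CUSUM value); `alpha` is an assumed
    exploration weight. Assumes G is connected."""
    centrality = nx.degree_centrality(G)
    cutoff = np.percentile(list(stats.values()), 90)
    suspicious = [v for v in G if stats[v] > cutoff]
    score = {}
    for v in G:
        # proximity bonus: nodes near suspicious nodes are more likely
        # to be affected next as the anomaly propagates
        d = min((nx.shortest_path_length(G, v, s) for s in suspicious),
                default=G.number_of_nodes())
        score[v] = stats[v] + alpha * centrality[v] / (1 + d)
    # exploitation (large statistics) balanced with exploration
    # (central nodes near suspected anomalies)
    return sorted(G, key=score.get, reverse=True)[:k]
```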
  4. Byte-addressable non-volatile memory (NVM) is a promising technology that provides near-DRAM performance with scalable memory capacity. However, it requires atomic data durability to ensure memory persistency. Therefore, many techniques, including logging and shadow paging, have been proposed. Most of them, however, either introduce extra write traffic to NVM or suffer from significant performance overhead on the critical path of program execution, or both. In this paper, we propose a transparent and efficient hardware-assisted out-of-place update (HOOP) mechanism that supports atomic data durability without incurring many extra writes or much performance overhead. The key idea is to write the updated data to a new place in NVM while retaining the old data until the updated data becomes durable. To support this, we develop a lightweight indirection layer in the memory controller that enables efficient address translation and adaptive garbage collection for NVM. We evaluate HOOP with a variety of popular data structures and data-intensive applications, including key-value stores and databases. Our evaluation shows that HOOP achieves low critical-path latency with small write amplification, close to that of a native system without persistence support. Compared with state-of-the-art crash-consistency techniques, it improves application performance by up to 1.7×, while reducing write amplification by up to 2.1×. HOOP also demonstrates scalable data recovery capability on multi-core systems.
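A toy software model of the out-of-place idea makes the atomicity argument concrete; the real HOOP mechanism lives in the memory controller hardware, and the names and structures below are ours. The single indirection-table update is the commit point: a crash at any earlier moment leaves the old durable copy reachable.

```python
class OutOfPlaceStore:
    """Toy model of out-of-place updates with an indirection layer.
    Illustrative only; HOOP implements this in hardware."""

    def __init__(self, size):
        self.physical = {}               # physical NVM cells
        self.translate = {}              # logical -> physical indirection
        self.free = list(range(size))    # free physical locations
        self.stale = []                  # old copies awaiting GC

    def write(self, logical_addr, value):
        new_loc = self.free.pop()        # out-of-place: never overwrite
        self.physical[new_loc] = value
        # commit point: one indirection update makes the new version
        # visible; a crash before this line leaves the old data intact
        old_loc = self.translate.get(logical_addr)
        self.translate[logical_addr] = new_loc
        if old_loc is not None:
            self.stale.append(old_loc)   # reclaimed later by GC

    def read(self, logical_addr):
        return self.physical[self.translate[logical_addr]]

    def garbage_collect(self):
        # adaptive GC in HOOP; here simply recycle all stale locations
        self.free.extend(self.stale)
        self.stale.clear()
```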
  5. Systems experiencing high-rate dynamic events, termed high-rate systems, typically undergo accelerations of amplitudes higher than 100 g in less than 10 ms. Examples include adaptive airbag deployment systems, hypersonic vehicles, and active blast mitigation systems. Given their critical functions, accurate and fast modeling tools are necessary for ensuring the target performance. However, the unique characteristics of these systems, which include (1) large uncertainties in the external loads, (2) high levels of non-stationarity and heavy disturbances, and (3) unmodeled dynamics generated from changes in system configurations, in combination with fast-changing environments, limit the applicability of physical modeling tools. In this paper, a deep learning algorithm is used to model high-rate systems and predict their response measurements. It consists of an ensemble of short-sequence long short-term memory (LSTM) cells that are trained concurrently. To enable multi-step-ahead predictions, a multi-rate sampler is designed to individually select the input space of each LSTM cell based on local dynamics extracted using the embedding theorem. The proposed algorithm is validated on experimental data obtained from a high-rate system. Results showed that the multi-rate sampler yields better feature extraction from non-stationary time series compared with a more heuristic method, resulting in significant improvement in step-ahead prediction accuracy and horizon. The lean and efficient architecture of the algorithm results in an average computing time of 25 μs, which is below the maximum prediction horizon, demonstrating the algorithm's promise in real-time high-rate applications.
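To make the sampler concrete, here is a minimal delay-embedding sketch; this is our illustration, in which the per-cell delays in `taus` are hand-picked, whereas the actual algorithm selects each cell's sampling rate from the local dynamics via the embedding theorem.

```python
import numpy as np

def delay_embed(series, t, dim, tau):
    """Delay-coordinate input vector [x(t), x(t-tau), ..., x(t-(dim-1)tau)],
    in the spirit of the embedding theorem the paper invokes."""
    idx = t - tau * np.arange(dim)
    return series[idx]

def multi_rate_inputs(series, t, dim, taus):
    # illustrative multi-rate sampler: ensemble member i receives a
    # delay embedding built at its own sampling rate taus[i]
    return [delay_embed(series, t, dim, tau) for tau in taus]

# e.g., three LSTM cells fed embeddings at fast, medium, and slow rates
x = np.sin(0.01 * np.arange(10_000))
inputs = multi_rate_inputs(x, t=5_000, dim=5, taus=[1, 8, 64])
```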