Title: Continual learning: a feature extraction formalization, an efficient algorithm, and barriers
Continual learning is an emerging paradigm in machine learning, wherein a model is exposed in an online fashion to data from multiple different distributions (i.e. environments) and is expected to adapt to the distribution change. Precisely, the goal is to perform well in the new environment while simultaneously retaining performance on the previous environments (i.e. avoiding “catastrophic forgetting”). While this setup has enjoyed a lot of attention in the applied community, there has been no theoretical work that even formalizes the desired guarantees. In this paper, we propose a formalization of continual learning through the framework of feature extraction: namely, one in which features, as well as a classifier, are trained with each environment. When the features are linear, we design an efficient gradient-based algorithm, DPGrad, that is guaranteed to perform well on the current environment and to avoid catastrophic forgetting. In the general case, when the features are non-linear, we show that no such algorithm can exist, whether efficient or not.
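The abstract gives no pseudocode, so the following is only a minimal sketch of the feature-extraction setup it describes: a shared linear feature map trained jointly with a fresh linear classifier for each environment, where updates to the shared features are projected so that earlier environments are not disturbed. The specific projection rule (restricting feature updates to directions orthogonal to past data) and all function names are illustrative assumptions, not the paper's DPGrad algorithm.

```python
import numpy as np

def orthogonal_projector(X_old, d):
    """Projector onto directions unused by past environments (illustrative rule)."""
    if X_old is None:
        return np.eye(d)
    U, _, _ = np.linalg.svd(X_old.T, full_matrices=True)
    r = np.linalg.matrix_rank(X_old)
    return U[:, r:] @ U[:, r:].T

def train_environment(W, X, y, P=None, lr=0.01, epochs=200):
    """One round of continual learning with a shared linear feature map W.

    W : (d, k) shared feature matrix, updated in place for the new environment.
    X : (n, d) data for the current environment; y : (n,) real-valued labels.
    P : optional (d, d) projector applied to feature-map gradients so that
        directions already used by earlier environments are left untouched.
    Returns the linear classifier v trained on top of the shared features.
    """
    n, _ = X.shape
    v = np.zeros(W.shape[1])             # per-environment linear head
    for _ in range(epochs):
        residual = (X @ W @ v - y) / n   # squared-loss residual
        grad_v = (X @ W).T @ residual
        grad_W = np.outer(X.T @ residual, v)
        if P is not None:
            grad_W = P @ grad_W          # protect earlier environments
        v -= lr * grad_v
        W -= lr * grad_W
    return v
```

In this sketch a caller would pass P=None for the first environment and orthogonal_projector(X_1, d) when training on the second, accumulating past data as more environments arrive.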
Award ID(s):
2211907
PAR ID:
10450560
Author(s) / Creator(s):
Date Published:
Journal Name:
Advances in neural information processing systems
ISSN:
1049-5258
Page Range / eLocation ID:
28414-28427
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Federated continual learning is a decentralized approach that enables edge devices to continuously learn new data, mitigating catastrophic forgetting while collaboratively training a global model. However, existing state-of-the-art approaches in federated continual learning focus primarily on learning continuously to classify discrete sets of images, leaving dense regression tasks such as depth estimation unaddressed. Furthermore, autonomous agents that use depth estimation to explore dynamic indoor environments inevitably encounter spatial and temporal shifts in data distributions. These shifts trigger a phenomenon called spatio-temporal catastrophic forgetting, a more complex and challenging form of catastrophic forgetting. In this paper, we address the fundamental research question: “Can we mitigate spatio-temporal catastrophic forgetting in federated continual learning for depth estimation in dynamic indoor environments?” To address this question, we propose Local Online and Continual Adaptation (LOCA), the first approach to address spatio-temporal catastrophic forgetting in dynamic indoor environments. LOCA relies on two key algorithmic innovations: online batch skipping and continual local aggregation. Our extensive experiments show that LOCA mitigates spatio-temporal catastrophic forgetting and improves global model performance, while running on-device up to 3.35× faster and consuming 3.13× less energy compared to the state of the art. Thus, LOCA lays the groundwork for scalable autonomous systems that adapt in real time to learn in private and dynamic indoor environments.
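The abstract names online batch skipping and continual local aggregation without detailing either criterion, so the snippet below is only an illustrative guess at the batch-skipping idea (the threshold rule and the callables are assumptions, not LOCA's actual method): batches the on-device model already fits well are dropped before any gradient computation, which is one way to cut compute and energy during continual adaptation.

```python
def local_online_update(model_step, batch_loss, batches, skip_threshold=0.05):
    """Illustrative online batch skipping (assumed criterion, not LOCA's).

    model_step : callable performing one gradient update on a batch.
    batch_loss : callable returning the current loss on a batch.
    batches    : iterable of batches arriving in a streaming fashion.
    """
    updates = skipped = 0
    for batch in batches:
        if batch_loss(batch) < skip_threshold:
            skipped += 1          # already well fit: save compute and energy
            continue
        model_step(batch)         # spend an update only on informative batches
        updates += 1
    return updates, skipped
```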
  2. Catastrophic forgetting remains an outstanding challenge in continual learning. Recently, methods inspired by the brain, such as continual representation learning and memory replay, have been used to combat catastrophic forgetting. Associative learning (retaining associations between inputs and outputs, even after good representations are learned) plays an important function in the brain; however, its role in continual learning has not been carefully studied. Here, we identified a two-layer neural circuit in the fruit fly olfactory system that performs continual associative learning between odors and their associated valences. In the first layer, inputs (odors) are encoded using sparse, high-dimensional representations, which reduces memory interference by activating nonoverlapping populations of neurons for different odors. In the second layer, only the synapses between odor-activated neurons and the odor’s associated output neuron are modified during learning; the rest of the weights are frozen to prevent unrelated memories from being overwritten. We prove theoretically that these two perceptron-like layers help reduce catastrophic forgetting compared to the original perceptron algorithm, under continual learning. We then show empirically on benchmark data sets that this simple and lightweight architecture outperforms other popular neural-inspired algorithms when also using a two-layer feedforward architecture. Overall, fruit flies evolved an efficient continual associative learning algorithm, and circuit mechanisms from neuroscience can be translated to improve machine computation.
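A rough sketch of the two-layer circuit described above might look like the following; the random sparse projection and top-k winner-take-all stand in for the expansion into a sparse, high-dimensional code, and the exact update rule and constants are assumptions rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

class FlyAssociativeLearner:
    """Two-layer sketch: sparse expansion, then a partially frozen readout."""

    def __init__(self, d_in, d_hidden, n_outputs, k_active=32, lr=0.1):
        # Layer 1: fixed sparse random expansion (assumed stand-in for the
        # olfactory wiring); it is never trained.
        self.proj = (rng.random((d_in, d_hidden)) < 0.1).astype(float)
        self.k_active = k_active
        # Layer 2: readout synapses; only rows touched by an active odor code
        # are ever modified, so unrelated memories stay frozen.
        self.readout = np.zeros((d_hidden, n_outputs))
        self.lr = lr

    def encode(self, x):
        """Sparse, high-dimensional code: keep only the top-k hidden units."""
        h = x @ self.proj
        code = np.zeros_like(h)
        code[np.argsort(h)[-self.k_active:]] = 1.0
        return code

    def learn(self, x, valence_index):
        """Modify only active-unit -> associated-output synapses."""
        active = self.encode(x) > 0
        self.readout[active, valence_index] += self.lr

    def predict(self, x):
        return int(np.argmax(self.encode(x) @ self.readout))
```

Because only the readout rows touched by an active code are ever updated, associations stored in the remaining rows stay frozen, which is the mechanism the abstract credits with reducing memory interference.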
  3. As an alternative to resource-intensive deep learning approaches to the continual learning problem, we propose a simple, fast algorithm inspired by adaptive resonance theory (ART). To cope with the curse of dimensionality and avoid catastrophic forgetting, we apply incremental principal component analysis (IPCA) to the model’s previously learned weights. Experiments show that this approach approximates the performance achieved using static PCA and is competitive with continual deep learning methods. Our implementation is available on https://github.com/neil-ash/ART-IPCA. 
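The abstract states only that incremental PCA is applied to the model's previously learned weights; the snippet below is a minimal sketch of that idea using scikit-learn's IncrementalPCA, with the matrix shapes, component count, and the way compressed weights would be reused all left as assumptions (the linked repository contains the actual implementation).

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

# Keep a running low-dimensional summary of previously learned weight vectors
# so that old knowledge is compressed rather than overwritten.
ipca = IncrementalPCA(n_components=16)

def absorb_task_weights(weight_matrix):
    """Fold one task's learned weights (rows = weight vectors) into the summary."""
    ipca.partial_fit(weight_matrix)

def compress(weight_matrix):
    """Project weights onto the retained principal components and back."""
    return ipca.inverse_transform(ipca.transform(weight_matrix))

# Example: a stream of per-task weight matrices (random stand-ins here).
for _ in range(3):
    absorb_task_weights(np.random.randn(64, 128))
```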
  4. Supervised continual learning involves updating a deep neural network (DNN) from an ever-growing stream of labeled data. While most work has focused on overcoming catastrophic forgetting, one of the major motivations behind continual learning is being able to efficiently update a network with new information, rather than retraining from scratch on the training dataset as it grows over time. Despite recent continual learning methods largely solving the catastrophic forgetting problem, little attention has been paid to the efficiency of these algorithms. Here, we study recent methods for incremental class learning and illustrate that many are highly inefficient in terms of compute, memory, and storage. Some methods even require more compute than training from scratch! We argue that for continual learning to have real-world applicability, the research community cannot ignore the resources used by these algorithms. There is more to continual learning than mitigating catastrophic forgetting.