<?xml version="1.0" encoding="UTF-8"?><rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:dcq="http://purl.org/dc/terms/"><records count="1" morepages="false" start="1" end="1"><record rownumber="1"><dc:product_type>Conference Paper</dc:product_type><dc:title>A SIMPLE, FAST ALGORITHM FOR CONTINUAL LEARNING FROM HIGH-DIMENSIONAL DATA</dc:title><dc:creator>Ashtekar, Neil; Honavar, Vasant G</dc:creator><dc:corporate_author/><dc:editor/><dc:description>As an alternative to resource-intensive deep learning approaches to the continual
learning problem, we propose a simple, fast algorithm inspired by adaptive resonance
theory (ART). To cope with the curse of dimensionality and avoid catastrophic
forgetting, we apply incremental principal component analysis (IPCA)
to the model’s previously learned weights. Experiments show that this approach
approximates the performance achieved using static PCA and is competitive
with continual deep learning methods. Our implementation is available at
https://github.com/neil-ash/ART-IPCA.</dc:description><dc:publisher>Open Review</dc:publisher><dc:date>2023-05-30</dc:date><dc:nsf_par_id>10561070</dc:nsf_par_id><dc:journal_name/><dc:journal_volume/><dc:journal_issue/><dc:page_range_or_elocation/><dc:issn/><dc:isbn/><dc:doi>https://doi.org/</dc:doi><dcq:identifierAwardId>2041759</dcq:identifierAwardId><dc:subject>continual learning</dc:subject><dc:version_number/><dc:location/><dc:rights/><dc:institution/><dc:sponsoring_org>National Science Foundation</dc:sponsoring_org></record></records></rdf:RDF>