

Title: Consolidation and retention of auditory categories acquired incidentally in performing a visuomotor task
A wealth of evidence indicates the existence of a consolidation phase, triggered by and following a practice session, wherein new memory traces relevant to task performance are transformed and honed to represent new knowledge. But the role of consolidation is not well understood in category learning and has not been studied at all under incidental category learning conditions. Here, we examined the acquisition, consolidation, and retention phases in a visuomotor task wherein auditory category information was available, but not required, to guide detection of an above-threshold visual target across one of four spatial locations. We compared two training conditions: (1) Constant, whereby repeated instances of one exemplar from an auditory category preceded a visual target, predicting its upcoming location; (2) Variable, whereby five distinct category exemplars predicted the visual target. Visual detection speed and accuracy, as well as the performance cost of randomizing the association of auditory category to visual target location, were assessed during online performance, again after a 24-hour delay to assess the expression of delayed gains, and after 10 days to assess retention. Results revealed delayed gains associated with incidental auditory category learning and retention effects for both training conditions. Offline processes can be triggered even for incidental auditory input and lead to category learning; variability of input can enhance the generalization of incidental auditory category learning.
Award ID(s):
1655126
NSF-PAR ID:
10085202
Journal Name:
Proceedings of the 40th Annual Conference of the Cognitive Science Society
Sponsoring Org:
National Science Foundation
More Like this
  1. Humans generate categories from complex regularities evolving across even imperfect sensory input. Here, we examined the possibility that incidental experiences can generate lasting category knowledge. Adults practiced a simple visuomotor task not dependent on acoustic input. Novel categories of acoustically complex sounds were not necessary for task success but aligned incidentally with distinct visuomotor responses in the task. Incidental sound category learning emerged robustly when within-category sound exemplar variability was closely yoked to visuomotor task demands, and was not apparent in the initial session when this coupling was less robust. Nonetheless, incidentally acquired sound category knowledge was evident in both cases one day later, indicative of offline learning gains, and nine days later learning in both cases supported explicit category labeling of novel sounds. Thus, a relatively brief incidental experience with multi-dimensional sound patterns aligned with behaviorally relevant actions and events can generate new sound categories, whether immediately after the learning experience or a day later. These categories undergo consolidation into long-term memory to support robust generalization of learning, rather than simply reflecting recall of specific sound-pattern exemplars previously encountered. Humans thus forage for information to acquire and consolidate new knowledge that may incidentally support behavior, even when learning is not strictly necessary for performance.
  2.
    Category learning is fundamental to cognition, but little is known about how it proceeds in real-world environments when learners do not have instructions to search for category-relevant information, do not make overt category decisions, and do not experience direct feedback. Prior research demonstrates that listeners can acquire task-irrelevant auditory categories incidentally as they engage in primarily visuomotor tasks. The current study examines the factors that support this incidental category learning. Three experiments systematically manipulated the relationship of four novel auditory categories with a consistent visual feature (color or location) that informed a simple behavioral keypress response regarding the visual feature. In both an in-person experiment and two online replications with extensions, incidental auditory category learning occurred reliably when category exemplars consistently aligned with visuomotor demands of the primary task, but not when they were misaligned. The presence of an additional irrelevant visual feature that was uncorrelated with the primary task demands neither enhanced nor harmed incidental learning. By contrast, incidental learning did not occur when auditory categories were aligned consistently with one visual feature, but the motor response in the primary task was aligned with another, category-unaligned visual feature. Moreover, category learning did not reliably occur across passive observation or when participants made a category-nonspecific, generic motor response. These findings show that incidental learning of categories is strongly mediated by the character of coincident behavior. 
  3. The environment provides multiple regularities that might be useful in guiding behavior if one were able to learn their structure. Statistical learning across simultaneous regularities is important, but poorly understood. We investigate learning across two domains: visuomotor sequence learning through the serial reaction time (SRT) task, and incidental auditory category learning via the systematic multimodal association reaction time (SMART) task. Several commonalities raise the possibility that these two learning phenomena may draw on common cognitive resources and neural networks. In each, participants are uninformed of the regularities that they come to use to guide actions, the outcomes of which may provide a form of internal feedback. We used dual-task conditions to compare learning of the regularities in isolation versus when they are simultaneously available to support behavior on a seemingly orthogonal visuomotor task. Learning occurred across the simultaneous regularities, without attenuation even when the informational value of a regularity was reduced by the presence of the additional, convergent regularity. Thus, the simultaneous regularities do not compete for associative strength, as in overshadowing effects. Moreover, visuomotor sequence learning and incidental auditory category learning do not appear to compete for common cognitive resources; learning across the simultaneous regularities was comparable to learning each regularity in isolation.
  4. Motor learning in visuomotor adaptation tasks results from both explicit and implicit processes, each responding differently to an error signal. Although the motor output side of these processes has been extensively studied, the visual input side is relatively unknown. We investigated if and how depth perception affects the computation of error information by explicit and implicit motor learning. Two groups of participants made reaching movements to bring a virtual cursor to a target in the frontoparallel plane. The Delayed group was allowed to reaim and their feedback was delayed to emphasize explicit learning, whereas the Clamped group received task-irrelevant clamped cursor feedback and continued to aim straight at the target to emphasize implicit adaptation. Both groups played this game in a highly detailed virtual environment (depth condition), leveraging a cover task of playing darts in a virtual tavern, and in an empty environment (no-depth condition). The Delayed group showed an increase in error sensitivity under the depth condition relative to the no-depth condition. In contrast, the Clamped group adapted to the same degree under both conditions. The movement kinematics of the Delayed participants also changed under the depth condition, consistent with the target appearing more distant, unlike the Clamped group. A comparison of the Delayed group's behavioral data with a perceptual task from the same individuals showed that the greater reaiming in the depth condition was consistent with an increase in the scaling of the error distance and size. These findings suggest that explicit and implicit learning processes may rely on different sources of perceptual information. NEW & NOTEWORTHY We leveraged a classic sensorimotor adaptation task to perform a first systematic assessment of the role of perceptual cues in the estimation of an error signal in 3-D space during motor learning. We crossed two conditions presenting different amounts of depth information with two manipulations emphasizing explicit and implicit learning processes. Explicit learning responded to the visual conditions, consistent with perceptual reports, whereas implicit learning appeared to be independent of them.
  5.
    Computer-aided diagnosis (CAD) systems must constantly cope with perpetual changes in data distribution caused by different sensing technologies, imaging protocols, and patient populations. Adapting these systems to new domains often requires significant amounts of labeled data for re-training, a process that is labor-intensive and time-consuming. We propose a memory-augmented capsule network for the rapid adaptation of CAD models to new domains. It consists of a capsule network that extracts feature embeddings from high-dimensional input, and a memory-augmented task network that exploits knowledge stored from the target domains. Our network is able to efficiently adapt to unseen domains using only a few annotated samples. We evaluate our method using a large-scale public lung nodule dataset (LUNA), coupled with our own collected lung nodule and incidental lung nodule datasets. When trained on the LUNA dataset, our network requires only 30 additional samples from our collected lung nodule and incidental lung nodule datasets to achieve clinically relevant performance (0.925 and 0.891 area under the receiver operating characteristic curve (AUROC), respectively). This result is equivalent to using two orders of magnitude less labeled training data while achieving the same performance. We further evaluate our method by introducing heavy noise, artifacts, and adversarial attacks. Under these severe conditions, our network's AUROC remains above 0.7, while the performance of state-of-the-art approaches drops to chance level.
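    The abstract above reports performance as AUROC, where 0.5 is the chance level. As a point of reference only (this sketch is not from the paper and assumes nothing about its model), AUROC can be computed directly from ranked scores via the Mann-Whitney U statistic:

```python
import random

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive example is scored
    higher than a randomly chosen negative one (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A scorer that ranks every positive above every negative reaches 1.0 ...
assert auroc([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]) == 1.0

# ... while scores unrelated to the labels hover near the 0.5 chance level.
random.seed(0)
labels = [random.randint(0, 1) for _ in range(2000)]
scores = [random.random() for _ in range(2000)]
rand = auroc(labels, scores)
assert 0.45 < rand < 0.55
```

    This rank-based formulation is numerically equivalent to integrating the ROC curve, which is why "chance level" in the abstract corresponds to AUROC near 0.5.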