Title: The representational glue for incidental category learning is alignment with task-relevant behavior
Category learning is fundamental to cognition, but little is known about how it proceeds in real-world environments when learners do not have instructions to search for category-relevant information, do not make overt category decisions, and do not experience direct feedback. Prior research demonstrates that listeners can acquire task-irrelevant auditory categories incidentally as they engage in primarily visuomotor tasks. The current study examines the factors that support this incidental category learning. Three experiments systematically manipulated the relationship of four novel auditory categories with a consistent visual feature (color or location) that informed a simple behavioral keypress response regarding the visual feature. In both an in-person experiment and two online replications with extensions, incidental auditory category learning occurred reliably when category exemplars consistently aligned with visuomotor demands of the primary task, but not when they were misaligned. The presence of an additional irrelevant visual feature that was uncorrelated with the primary task demands neither enhanced nor harmed incidental learning. By contrast, incidental learning did not occur when auditory categories were aligned consistently with one visual feature, but the motor response in the primary task was aligned with another, category-unaligned visual feature. Moreover, category learning did not reliably occur across passive observation or when participants made a category-nonspecific, generic motor response. These findings show that incidental learning of categories is strongly mediated by the character of coincident behavior.
Award ID(s):
1950054 1655126
Publication Date:
NSF-PAR ID:
10275655
Journal Name:
Journal of Experimental Psychology
ISSN:
0278-7393
Sponsoring Org:
National Science Foundation
More Like This
  1. A wealth of evidence indicates the existence of a consolidation phase, triggered by and following a practice session, wherein new memory traces relevant to task performance are transformed and honed to represent new knowledge. But the role of consolidation is not well understood in category learning and has not been studied at all under incidental category learning conditions. Here, we examined the acquisition, consolidation and retention phases in a visuomotor task wherein auditory category information was available, but not required, to guide detection of an above-threshold visual target across one of four spatial locations. We compared two training conditions: (1) Constant, whereby repeated instances of one exemplar from an auditory category preceded a visual target, predicting its upcoming location; (2) Variable, whereby five distinct category exemplars predicted the visual target. Visual detection speed and accuracy, as well as the performance cost of randomizing the association of auditory category to visual target location, were assessed during online performance, again after a 24-hour delay to assess the expression of delayed gains, and after 10 days to assess retention. Results revealed delayed gains associated with incidental auditory category learning and retention effects for both training conditions. Offline processes can be triggered even for incidental auditory input and lead to category learning; variability of input can enhance the generation of incidental auditory category learning.
  2. Feature-based attention is known to enhance visual processing globally across the visual field, even at task-irrelevant locations. Here, we asked whether attention to object categories, in particular faces, shows similar location-independent tuning. Using EEG, we measured the face-selective N170 component of the EEG signal to examine neural responses to faces at task-irrelevant locations while participants attended to faces at another task-relevant location. Across two experiments, we found that visual processing of faces was amplified at task-irrelevant locations when participants attended to faces relative to when participants attended to either buildings or scrambled face parts. The fact that we see this enhancement with the N170 suggests that these attentional effects occur at the earliest stage of face processing. Two additional behavioral experiments showed that it is easier to attend to the same object category across the visual field relative to two distinct categories, consistent with object-based attention spreading globally. Together, these results suggest that attention to high-level object categories shows similar spatially global effects on visual processing as attention to simple, individual, low-level features.
  3. Many goal-directed actions that require rapid visuomotor planning and perceptual decision-making are affected in older adults, causing difficulties in execution of many functional activities of daily living. Visuomotor planning and perceptual identification are mediated by the dorsal and ventral visual streams, respectively, but it is unclear how age-induced changes in sensory processing in these streams contribute to declines in visuomotor decision-making performance. Previously, we showed that in young adults, task demands influenced movement strategies during visuomotor decision-making, reflecting differential integration of sensory information between the two streams. Here, we asked whether older adults would exhibit deficits in interactions between the two streams during demanding motor tasks. Older adults (n = 15) and young controls (n = 26) performed reaching or interception movements toward virtual objects. In some blocks of trials, participants also had to select an appropriate movement goal based on the shape of the object. Our results showed that older adults corrected fewer initial decision errors during both reaching and interception movements. During the interception decision task, older adults made more decision- and execution-related errors than young adults, which were related to early initiation of their movements. Together, these results suggest that older adults have a reduced ability to integrate new perceptual information to guide online action, which may reflect impaired ventral-dorsal stream interactions. NEW & NOTEWORTHY Older adults show declines in vision, decision-making, and motor control, which can lead to functional limitations. We used a rapid visuomotor decision task to examine how these deficits may interact to affect task performance. Compared with healthy young adults, older adults made more errors in both decision-making and motor execution, especially when the task required intercepting moving targets.
This suggests that age-related declines in integrating perceptual and motor information may contribute to functional deficits.
  4. Abstract

    A training method to improve speech hearing in noise has proven elusive, with most methods failing to transfer to untrained tasks. One common approach to identify potentially viable training paradigms is to make use of cross-sectional designs. For instance, the consistent finding that people who choose to avidly engage with action video games as part of their normal life also show enhanced performance on non-game visual tasks has been used as a foundation to test the causal impact of such game play via true experiments (e.g., in more translational designs). However, little work has examined the association between action video game play and untrained auditory tasks, which would speak to the possible utility of using such games to improve speech hearing in noise. To examine this possibility, 80 participants with mixed action video game experience were tested on a visual reaction time task that has reliably shown superior performance in action video game players (AVGPs) compared to non-players (≤ 5 h/week across game categories) and multi-genre video game players (> 5 h/week across game categories). Auditory cognition and perception were tested using auditory reaction time and two speech-in-noise tasks. Performance of AVGPs on the visual task replicated previous positive findings. However, no significant benefit of action video game play was found on the auditory tasks. We suggest that, while AVGPs interact meaningfully with a rich visual environment during play, they may not interact with the games’ auditory environment. These results suggest that far transfer learning during action video game play is modality-specific and that an acoustically relevant auditory environment may be needed to improve auditory probabilistic thinking.

  5. Abstract

    Human visual working memory (VWM) is a memory store people use to maintain the visual features of objects and scenes. Although it is obvious that bottom-up information influences VWM, the extent to which top-down conceptual information influences VWM is largely unknown. We report an experiment in which groups of participants were trained in one of two different categories of geologic faults (left/right lateral, or normal/reverse faults), or received no category training. Following training, participants performed a visual change detection task in which category knowledge was irrelevant to the task. Participants were more likely to detect a change in geologic scenes when the changes crossed a trained categorical distinction (e.g., the left/right lateral fault boundary), compared to within-category changes. In addition, participants trained to distinguish left/right lateral faults were more likely to detect changes when the scenes were mirror images along the left/right dimension. Similarly, participants trained to distinguish normal/reverse faults were more likely to detect changes when scenes were mirror images along the normal/reverse dimension. Our results provide direct empirical evidence that conceptual knowledge influences VWM performance for complex visual information. An implication of our results is that cognitive scientists may need to reconceptualize VWM so that it is closer to “conceptual short-term memory”.
