
Title: Conceptual knowledge shapes visual working memory for complex visual information
Abstract

Human visual working memory (VWM) is a memory store people use to maintain the visual features of objects and scenes. Although it is obvious that bottom-up information influences VWM, the extent to which top-down conceptual information influences VWM is largely unknown. We report an experiment in which groups of participants were trained in one of two different categories of geologic faults (left/right lateral, or normal/reverse faults), or received no category training. Following training, participants performed a visual change detection task in which category knowledge was irrelevant to the task. Participants were more likely to detect a change in geologic scenes when the changes crossed a trained categorical distinction (e.g., the left/right lateral fault boundary), compared to within-category changes. In addition, participants trained to distinguish left/right lateral faults were more likely to detect changes when the scenes were mirror images along the left/right dimension. Similarly, participants trained to distinguish normal/reverse faults were more likely to detect changes when scenes were mirror images along the normal/reverse dimension. Our results provide direct empirical evidence that conceptual knowledge influences VWM performance for complex visual information. An implication of our results is that cognitive scientists may need to reconceptualize VWM so that it is closer to “conceptual short-term memory”.

 
NSF-PAR ID:
10367329
Author(s) / Creator(s):
; ; ;
Publisher / Repository:
Nature Publishing Group
Date Published:
Journal Name:
Scientific Reports
Volume:
12
Issue:
1
ISSN:
2045-2322
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Eleni Papageorgiou (Ed.)
    Background

    Research on task performance under visual field loss is often limited by small and heterogeneous samples. Simulations of visual impairments have the potential to address many of these challenges. Digitally altered pictures, glasses, and contact lenses with partial occlusions have been used in the past. One of the most promising methods is a gaze-contingent display that occludes parts of the visual field according to the current gaze position. In this study, the gaze-contingent paradigm was implemented in a static driving simulator to simulate visual field loss and to evaluate parallels in the resulting driving and gaze behavior in comparison to patients.

    Methods

    The sample comprised 15 participants without visual impairment. All the subjects performed three drives: with full vision, simulated left-sided homonymous hemianopia, and simulated right-sided homonymous hemianopia, respectively. During each drive, the participants drove through an urban environment where they had to maneuver through intersections by crossing straight ahead, turning left, and turning right.

    Results

    The subjects reported reduced safety and increased workload during simulated visual field loss, which was reflected in reduced lane-position stability and fewer large gaze movements. Initial compensatory strategies were observed, including a displaced gaze position and a fixation ratio distorted toward the blind side, which was more pronounced for right-sided visual field loss. During left-sided visual field loss, the participants showed a smaller horizontal range of gaze positions, longer fixation durations, and smaller saccadic amplitudes compared to right-sided homonymous hemianopia and, more distinctly, compared to normal vision.

    Conclusion

    The results largely mirror reports from driving and visual search tasks under simulated and pathological homonymous hemianopia with respect to driving and scanning challenges, initially adopted compensatory strategies, and driving safety. This supports the notion that gaze-contingent displays can be a useful addition to driving simulator research on visual impairments, provided the results are interpreted with the method's limitations and its inherent differences from the pathological impairment in mind.

     
  2. The function of long-term memory is not just to reminisce about the past, but also to make predictions that help us behave appropriately and efficiently in the future. This predictive function of memory provides a new perspective on the classic question from memory research of why we remember some things but not others. If prediction is a key outcome of memory, then the extent to which an item generates a prediction signifies that this information already exists in memory and need not be encoded. We tested this principle using human intracranial EEG as a time-resolved method to quantify prediction in visual cortex during a statistical learning task and link the strength of these predictions to subsequent episodic memory behavior. Epilepsy patients of both sexes viewed rapid streams of scenes, some of which contained regularities that allowed the category of the next scene to be predicted. We verified that statistical learning occurred using neural frequency tagging and measured category prediction with multivariate pattern analysis. Although neural prediction was robust overall, this was driven entirely by predictive items that were subsequently forgotten. Such interference provides a mechanism by which prediction can regulate memory formation to prioritize encoding of information that could help learn new predictive relationships.

    SIGNIFICANCE STATEMENT: When faced with a new experience, we are rarely at a loss for what to do. Rather, because many aspects of the world are stable over time, we rely on past experiences to generate expectations that guide behavior. Here we show that these expectations during a new experience come at the expense of memory for that experience. From intracranial recordings of visual cortex, we decoded what humans expected to see next in a series of photographs based on patterns of neural activity. Photographs that generated strong neural expectations were more likely to be forgotten in a later behavioral memory test. Prioritizing the storage of experiences that currently lead to weak expectations could help improve these expectations in future encounters.

     
  3.
    Abstract

    A listener's interpretation of a given speech sound can vary probabilistically from moment to moment. Previous experience (i.e., the contexts in which one has encountered an ambiguous sound) can further influence the interpretation of speech, a phenomenon known as perceptual learning for speech. This study used multivoxel pattern analysis to query how neural patterns reflect perceptual learning, leveraging archival fMRI data from a lexically guided perceptual learning study conducted by Myers and Mesite [Myers, E. B., & Mesite, L. M. Neural systems underlying perceptual adjustment to non-standard speech tokens. Journal of Memory and Language, 76, 80–93, 2014]. In that study, participants first heard ambiguous /s/–/∫/ blends in either /s/-biased lexical contexts (epi_ode) or /∫/-biased contexts (refre_ing); subsequently, they performed a phonetic categorization task on tokens from an /asi/–/a∫i/ continuum. In the current work, a classifier was trained to distinguish between phonetic categorization trials in which participants heard unambiguous productions of /s/ and those in which they heard unambiguous productions of /∫/. The classifier was able to generalize this training to ambiguous tokens from the middle of the continuum on the basis of individual participants' trial-by-trial perception. We take these findings as evidence that perceptual learning for speech involves neural recalibration, such that the pattern of activation approximates the perceived category. Exploratory analyses showed that left parietal regions (supramarginal and angular gyri) and right temporal regions (superior, middle, and transverse temporal gyri) were most informative for categorization. Overall, our results inform an understanding of how moment-to-moment variability in speech perception is encoded in the brain.
  4. Abstract

    Introduction

    How do multiple sources of information interact to form mental representations of object categories? It is commonly held that object categories reflect the integration of perceptual features and semantic/knowledge‐based features. To explore the relative contributions of these two sources of information, we used functional magnetic resonance imaging (fMRI) to identify regions involved in the representation of object categories with shared visual and/or semantic features.

    Methods

    While in the MRI scanner, participants (N = 20) viewed a series of objects that varied in their degree of visual and semantic overlap. We used a blocked adaptation design to identify sensitivity to visual and semantic features in a priori visual processing regions and in a distributed network of object processing regions with an exploratory whole‐brain analysis.

    Results

    Somewhat surprisingly, within higher‐order visual processing regions—specifically lateral occipital cortex (LOC)—we did not obtain any difference in neural adaptation for shared visual versus semantic category membership. More broadly, both visual and semantic information affected a distributed network of independently identified category‐selective regions. Adaptation was seen in a whole‐brain network of processing regions in response to both visual and semantic similarity; specifically, the angular gyrus (AnG) adapted to visual similarity and the dorsomedial prefrontal cortex (DMPFC) adapted to both visual and semantic similarity.

    Conclusions

    Our findings suggest that perceptual features help organize mental categories throughout the object processing hierarchy. Most notably, visual similarity also influenced adaptation in nonvisual brain regions (i.e., AnG and DMPFC). We conclude that category‐relevant visual features are maintained in higher‐order conceptual representations and visual information plays an important role in both the acquisition and neural representation of conceptual object categories.

     
  5. Abstract

    Recent studies have examined the effects of conventional transcranial direct current stimulation (tDCS) on working memory (WM) performance, but this method has relatively low spatial precision and generally involves a reference electrode that complicates interpretation. Herein, we report a repeated-measures crossover study of 25 healthy adults who underwent multielectrode tDCS of the left dorsolateral prefrontal cortex (DLPFC), right DLPFC, or sham in 3 separate visits. Shortly after each stimulation session, participants performed a verbal WM (VWM) task during magnetoencephalography, and the resulting data were examined in the time–frequency domain and imaged using a beamformer. We found that after left DLPFC stimulation, participants exhibited stronger responses across a network of left-lateralized cortical areas, including the supramarginal gyrus, prefrontal cortex, inferior frontal gyrus, and cuneus, as well as the right hemispheric homologues of these regions. Importantly, these effects were specific to the alpha-band, which has been previously implicated in VWM processing. Although stimulation condition did not significantly affect performance, stepwise regression revealed a relationship between reaction time and response amplitude in the left precuneus and supramarginal gyrus. These findings suggest that multielectrode tDCS targeting the left DLPFC affects the neural dynamics underlying offline VWM processing, including utilization of a more extensive bilateral cortical network.

     