

Title: Nameability predicts subjective and objective measures of visual similarity
Do people perceive shapes to be similar based purely on their physical features? Or is visual similarity influenced by top-down knowledge? In the present studies, we demonstrate that top-down information – in the form of verbal labels that people associate with visual stimuli – predicts visual similarity as measured using subjective (Experiment 1) and objective (Experiment 2) tasks. In Experiment 1, shapes that were previously calibrated to be (putatively) perceptually equidistant were more likely to be grouped together if they shared a name. In Experiment 2, more nameable shapes were easier for participants to discriminate from other images, again controlling for their perceptual distance. We discuss what these results mean for constructing visual stimulus spaces that are perceptually uniform, and we consider the theoretical implications of the fact that perceptual similarity is sensitive to top-down information such as the ease with which an object can be named.
Award ID(s): 1734260
NSF-PAR ID: 10213697
Editor(s): Denison, S.; Mack, M.; Xu, Y.; Armstrong, B.C.
Journal Name: Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    The grouping of sensory stimuli into categories is fundamental to cognition. Previous research in the visual and auditory systems supports a two‐stage processing hierarchy that underlies perceptual categorization: (a) a “bottom‐up” perceptual stage in sensory cortices where neurons show selectivity for stimulus features and (b) a “top‐down” second stage in higher level cortical areas that categorizes the stimulus‐selective input from the first stage. In order to test the hypothesis that the two‐stage model applies to the somatosensory system, 14 human participants were trained to categorize vibrotactile stimuli presented to their right forearm. Then, during an fMRI scan, participants actively categorized the stimuli. Representational similarity analysis revealed stimulus selectivity in areas including the left precentral and postcentral gyri, the supramarginal gyrus, and the posterior middle temporal gyrus. Crucially, we identified a single category‐selective region in the left ventral precentral gyrus. Furthermore, an estimation of directed functional connectivity delivered evidence for robust top‐down connectivity from the second to first stage. These results support the validity of the two‐stage model of perceptual categorization for the somatosensory system, suggesting common computational principles and a unified theory of perceptual categorization across the visual, auditory, and somatosensory systems.
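    The representational similarity analysis mentioned above can be sketched in a few lines. The matrix sizes, the correlation-distance metric, and the use of Spearman rank correlation below are illustrative assumptions for a generic RSA, not the authors' exact pipeline.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def rdm(patterns):
        """Representational dissimilarity matrix in condensed form:
        one correlation-distance entry per pair of stimulus conditions.
        `patterns` has shape (n_conditions, n_voxels)."""
        return pdist(patterns, metric="correlation")

    rng = np.random.default_rng(0)
    neural = rng.normal(size=(8, 50))  # placeholder: 8 stimuli x 50 voxels
    model = rng.normal(size=(8, 50))   # hypothetical model representation

    # Stimulus selectivity in a region is assessed by rank-correlating its
    # neural RDM with a model RDM; rank correlation is conventional because
    # the two RDMs need not be on comparable scales.
    rho, p = spearmanr(rdm(neural), rdm(model))
    ```

    A region whose RDM reliably correlates with the stimulus-feature RDM would count as stimulus-selective in the sense used above.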

     
  2. To fluidly engage with the world, our brains must simultaneously represent both the scene in front of us and our memory of the immediate surrounding environment (i.e., local visuospatial context). How does the brain's functional architecture enable sensory and mnemonic representations to closely interface while also avoiding sensory-mnemonic interference? Here, we asked this question using first-person, head-mounted virtual reality and fMRI. Using virtual reality, human participants of both sexes learned a set of immersive, real-world visuospatial environments in which we systematically manipulated the extent of visuospatial context associated with a scene image in memory across three learning conditions, spanning from a single FOV to a city street. We used individualized, within-subject fMRI to determine which brain areas support memory of the visuospatial context associated with a scene during recall (Experiment 1) and recognition (Experiment 2). Across the whole brain, activity in three patches of cortex was modulated by the amount of known visuospatial context, each located immediately anterior to one of the three scene perception areas of high-level visual cortex. Individual subject analyses revealed that these anterior patches corresponded to three functionally defined place memory areas, which selectively respond when visually recalling personally familiar places. In addition to showing activity levels that were modulated by the amount of visuospatial context, multivariate analyses showed that these anterior areas represented the identity of the specific environment being recalled. Together, these results suggest a convergence zone for scene perception and memory of the local visuospatial context at the anterior edge of high-level visual cortex.

    SIGNIFICANCE STATEMENT: As we move through the world, the visual scene around us is integrated with our memory of the wider visuospatial context. Here, we sought to understand how the functional architecture of the brain enables coexisting representations of the current visual scene and memory of the surrounding environment. Using a combination of immersive virtual reality and fMRI, we show that memory of visuospatial context outside the current FOV is represented in a distinct set of brain areas immediately anterior and adjacent to the perceptually oriented scene-selective areas of high-level visual cortex. This functional architecture would allow efficient interaction between immediately adjacent mnemonic and perceptual areas while also minimizing interference between mnemonic and perceptual representations.

     
  3. Abstract
    The ability to take contextual information into account is essential for successful speech processing. This study examines how individuals with high-functioning autism and those without adjust their perceptual expectations while discriminating speech sounds in different phonological contexts. Listeners were asked to discriminate pairs of sibilant-vowel monosyllables. Typically, discriminability of sibilants increases when the sibilants are embedded in perceptually enhancing contexts (provided the appropriate context-specific perceptual adjustment is performed) and decreases in perceptually diminishing contexts. This study found a reduction in the differences in perceptual response across enhancing and diminishing contexts among high-functioning autistic individuals relative to neurotypical controls. This reduced adjustment of perceptual expectations is consistent with greater autonomy of low-level perceptual processing in autism and a weaker influence of top-down information from the surrounding context.
  4. Abstract: Research Highlights

    Children and adults conceptually and perceptually categorize speech and song from age 4.

    Listeners use F0 instability, harmonicity, spectral flux, and utterance duration to determine whether vocal stimuli sound like song.

    Acoustic cue weighting changes with age, becoming adult‐like at age 8 for perceptual categorization and at age 12 for conceptual differentiation.

    Young children are still learning to categorize speech and song, which leaves open the possibility that music‐ and language‐specific skills are not so domain‐specific.

     
  5. Abstract

    Human visual working memory (VWM) is a memory store people use to maintain the visual features of objects and scenes. Although it is obvious that bottom-up information influences VWM, the extent to which top-down conceptual information influences VWM is largely unknown. We report an experiment in which groups of participants were trained in one of two different categories of geologic faults (left/right lateral, or normal/reverse faults), or received no category training. Following training, participants performed a visual change detection task in which category knowledge was irrelevant to the task. Participants were more likely to detect a change in geologic scenes when the changes crossed a trained categorical distinction (e.g., the left/right lateral fault boundary), compared to within-category changes. In addition, participants trained to distinguish left/right lateral faults were more likely to detect changes when the scenes were mirror images along the left/right dimension. Similarly, participants trained to distinguish normal/reverse faults were more likely to detect changes when scenes were mirror images along the normal/reverse dimension. Our results provide direct empirical evidence that conceptual knowledge influences VWM performance for complex visual information. An implication of our results is that cognitive scientists may need to reconceptualize VWM so that it is closer to “conceptual short-term memory”.

     