

Title: Exploring spatiotemporal neural dynamics of the human visual cortex
Abstract

The human visual cortex is organized hierarchically. Although evidence supporting this hypothesis has accumulated, specific details regarding the spatiotemporal flow of information remain open. Here we present detailed spatiotemporal profiles of the correlation between neural activity and low-level and high-level features derived from an eight-layer neural network pretrained for object recognition. These correlation profiles indicate an early-to-late shift from low-level features to high-level features, and from low-level regions to higher-level regions along the visual hierarchy, consistent with feedforward information flow. Additionally, we computed three feature sets from the low- and high-level features provided by the neural network: object-category-relevant low-level features (the components common to the low-level and high-level features), low-level features roughly orthogonal to the high-level features (the residual Layer 1 features), and unique high-level features roughly orthogonal to the low-level features (the residual Layer 7 features). Contrasting the correlation effects of the common components and the residual Layer 1 features, we observed that the early visual cortex (EVC) correlated similarly with the two feature sets early in time; in a later time window, however, the EVC showed a stronger and longer-lasting correlation with the common components (i.e., the low-level, object-category-relevant features) than with the low-level residual features, an effect unlikely to arise from purely feedforward information flow. Overall, our results indicate that non-feedforward processes, for example top-down influences from mental representations of categories, may facilitate differentiation between these two types of low-level features within the EVC.
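The three feature sets described above can be illustrated with a minimal sketch. One common way to obtain shared and residual components is to regress one layer's features onto the other's via least squares; the feature matrices and decomposition below are hypothetical stand-ins under that assumption, not the authors' exact pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical feature matrices: rows = images, columns = network units.
layer1 = rng.standard_normal((100, 50))   # low-level (Layer 1) features
layer7 = rng.standard_normal((100, 20))   # high-level (Layer 7) features

# Regress Layer 7 onto Layer 1: the fitted part plays the role of the
# "common components"; the residual is the Layer 7 information that is
# orthogonal to the Layer 1 feature space.
beta, *_ = np.linalg.lstsq(layer1, layer7, rcond=None)
common = layer1 @ beta              # shared, category-relevant component
residual7 = layer7 - common         # unique high-level (residual Layer 7) features

# By construction the residuals are orthogonal to the Layer 1 column space.
print(np.abs(layer1.T @ residual7).max())
```

The residual Layer 1 features would be obtained symmetrically, by regressing the Layer 1 features onto the Layer 7 features and keeping the residuals.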

 
NSF-PAR ID: 10460647
Publisher / Repository: Wiley Blackwell (John Wiley & Sons)
Journal Name: Human Brain Mapping
Volume: 40
Issue: 14
ISSN: 1065-9471
Page Range / eLocation ID: p. 4213-4238
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Introduction

    How do multiple sources of information interact to form mental representations of object categories? It is commonly held that object categories reflect the integration of perceptual features and semantic/knowledge-based features. To explore the relative contributions of these two sources of information, we used functional magnetic resonance imaging (fMRI) to identify regions involved in the representation of object categories with shared visual and/or semantic features.

    Methods

    Participants (N = 20) viewed, while in the MRI scanner, a series of objects that varied in their degree of visual and semantic overlap. We used a blocked adaptation design to identify sensitivity to visual and semantic features in a priori visual processing regions, and in a distributed network of object processing regions with an exploratory whole-brain analysis.

    Results

    Somewhat surprisingly, within higher-order visual processing regions, specifically the lateral occipital cortex (LOC), we observed no difference in neural adaptation for shared visual versus shared semantic category membership. More broadly, both visual and semantic information affected a distributed network of independently identified category-selective regions. Adaptation was seen in a whole-brain network of processing regions in response to both visual and semantic similarity; specifically, the angular gyrus (AnG) adapted to visual similarity, and the dorsomedial prefrontal cortex (DMPFC) adapted to both visual and semantic similarity.

    Conclusions

    Our findings suggest that perceptual features help organize mental categories throughout the object processing hierarchy. Most notably, visual similarity also influenced adaptation in nonvisual brain regions (i.e., the AnG and DMPFC). We conclude that category-relevant visual features are maintained in higher-order conceptual representations and that visual information plays an important role in both the acquisition and the neural representation of conceptual object categories.

     
  2. Abstract

    Objects can be described in terms of low-level properties (e.g., boundaries) and high-level properties (e.g., object semantics). While recent behavioral findings suggest that the influence of semantic relatedness between objects on attentional allocation can be independent of task relevance, the underlying neural substrate of semantic influences on attention remains ill-defined. Here, we employ behavioral and functional magnetic resonance imaging measures to uncover the mechanism by which semantic information increases visual processing efficiency. We demonstrate that the strength of the semantic relatedness signal decoded from the left inferior frontal gyrus: 1) influences attention, producing behavioral semantic benefits; 2) biases spatial attention maps in the intraparietal sulcus, subsequently modulating early visual cortex activity; and 3) directly predicts the magnitude of the behavioral semantic benefit. Altogether, these results identify a specific mechanism driving task-independent semantic influences on attention.
  3. Abstract

    Responses to visually presented objects along the cortical surface of the human brain have a large-scale organization reflecting the broad categorical divisions of animacy and object size. Emerging evidence indicates that this topographical organization is supported by differences between objects in mid-level perceptual features. With regard to the timing of neural responses, images of objects quickly evoke neural responses with decodable information about animacy and object size, but are mid-level features sufficient to evoke these rapid neural responses? Or is slower iterative neural processing required to untangle information about animacy and object size from mid-level features, requiring hundreds of milliseconds more processing time? To answer this question, we used EEG to measure human neural responses to images of objects and their texform counterparts, unrecognizable images that preserve some mid-level feature information about texture and coarse form. We found that texform images evoked neural responses with early decodable information about both animacy and real-world size, as early as responses evoked by original images. Furthermore, successful cross-decoding indicates that both texform and original images evoke information about animacy and size through a common underlying neural basis. Broadly, these results indicate that the visual system contains a mid-level feature bank carrying linearly decodable information on animacy and size, which can be rapidly activated without requiring explicit recognition or protracted temporal processing.
  4. Abstract

    Despite extensive studies detecting laminar functional magnetic resonance imaging (fMRI) signals to illustrate the canonical microcircuit, the spatiotemporal characteristics of laminar-specific information flow across cortical regions remain to be fully investigated in both evoked and resting conditions at different brain states. Here, we developed a multislice line-scanning fMRI (MS-LS) method to detect laminar fMRI signals in adjacent cortical regions with high spatial (50 μm) and temporal (100 ms) resolution in anesthetized rats. Across different trials, we detected either laminar-specific positive or negative blood-oxygen-level-dependent (BOLD) responses in the cortical region adjacent to the most activated cortex under the evoked condition. Specifically, in contrast to the typical Layer (L) 4 correlation across regions due to thalamocortical projections in trials with positive BOLD, a strong correlation pattern specific to L2/3 was detected in trials with negative BOLD in adjacent regions, indicating brain state-dependent laminar fMRI responses based on corticocortical interaction. Also, in the resting-state (rs-)fMRI study, robust lag-time differences in L2/3, L4, and L5 across multiple cortices reflected low-frequency rs-fMRI signal propagation from caudal to rostral slices. In summary, our study provides a unique laminar fMRI mapping scheme to better characterize trial-specific intra- and interlaminar functional connectivity in evoked and resting-state MS-LS.
  5. Abstract

    Previous work has demonstrated similarities and differences between aerial and terrestrial image viewing. Aerial scene categorization, a pivotal visual processing task for gathering geoinformation, heavily depends on rotation-invariant information. Aerial image-centered research has revealed effects of low-level features on performance of various aerial image interpretation tasks. However, there are fewer studies of viewing behavior for aerial scene categorization and of higher-level factors that might influence that categorization. In this paper, experienced subjects’ eye movements were recorded while they were asked to categorize aerial scenes. A typical viewing center bias was observed. Eye movement patterns varied among categories. We explored the relationship of nine image statistics to observers’ eye movements. Results showed that if the images were less homogeneous, and/or if they contained fewer or no salient diagnostic objects, viewing behavior became more exploratory. Higher- and object-level image statistics were predictive at both the image and scene category levels. Scanpaths were generally organized, and small differences in scanpath randomness could be roughly captured by critical object saliency. Participants tended to fixate on critical objects. Image statistics included in this study showed rotational invariance. The results supported our hypothesis that the availability of diagnostic objects strongly influences eye movements in this task. In addition, this study provides supporting evidence for Loschky et al.’s (Journal of Vision, 15(6), 11, 2015) speculation that aerial scenes are categorized on the basis of image parts and individual objects. The findings are discussed in relation to theories of scene perception and their implications for automation development.

     
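The cross-decoding analysis mentioned in related item 3 above (training a decoder on responses to texform images and testing it on responses to original images) can be sketched on simulated data. The simulated EEG patterns and the difference-of-means linear decoder below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials, n_channels = 200, 64

# Simulated sensor patterns: animate vs. inanimate trials share one spatial
# signal pattern across both image types, plus independent noise per trial.
signal = rng.standard_normal(n_channels)
labels = rng.integers(0, 2, n_trials)          # 0 = inanimate, 1 = animate
texform = np.outer(labels * 2 - 1, signal) + rng.standard_normal((n_trials, n_channels))
original = np.outer(labels * 2 - 1, signal) + rng.standard_normal((n_trials, n_channels))

# Cross-decoding: fit a difference-of-means linear decoder on texform trials...
w = texform[labels == 1].mean(0) - texform[labels == 0].mean(0)
b = -w @ (texform[labels == 1].mean(0) + texform[labels == 0].mean(0)) / 2

# ...then test it on trials evoked by the original images. Above-chance
# accuracy here indicates a shared basis across the two image types.
pred = (original @ w + b > 0).astype(int)
accuracy = (pred == labels).mean()
print(f"cross-decoding accuracy: {accuracy:.2f}")
```

A real analysis would typically run this per time point and cross-validate, but the train-on-one-image-type, test-on-the-other logic is the same.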