Title: Shared spatiotemporal category representations in biological and artificial deep neural networks
Visual scene category representations emerge very rapidly, yet the computational transformations that enable such invariant categorizations remain elusive. Deep convolutional neural networks (CNNs) perform visual categorization at near human-level accuracy using a feedforward architecture, providing neuroscientists with the opportunity to assess one successful series of representational transformations that enable categorization in silico. The goal of the current study is to assess the extent to which sequential scene category representations built by a CNN map onto those built in the human brain, as assessed by high-density, time-resolved event-related potentials (ERPs). We found correspondence both over time and across the scalp: earlier (0–200 ms) ERP activity was best explained by early CNN layers at all electrodes. Although later activity at most electrode sites continued to correspond to early CNN layers, activity at right occipito-temporal electrodes was best explained by the later, fully connected layers of the CNN around 225 ms post-stimulus, with similar patterns at frontal electrodes. Taken together, these results suggest that scene category representations develop through a dynamic interplay between early activity over occipital electrodes and later activity over temporal and frontal electrodes.
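In broad strokes, the layer-to-ERP comparison described above can be set up as a representational similarity analysis (RSA): compute a representational dissimilarity matrix (RDM) for each CNN layer and for the ERP scalp topography at each time point, then ask which layer's RDM best matches the neural RDM as time unfolds. The sketch below is a minimal illustration of that logic, not the authors' code; `layer_activations` and `erp` are assumed, pre-extracted inputs.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(features):
    """Condensed image-by-image dissimilarity (1 - Pearson r) across rows."""
    return pdist(features, metric="correlation")

def layer_erp_correspondence(layer_activations, erp):
    """For each time point, find the CNN layer whose RDM best matches the ERP RDM.

    layer_activations: list of (n_images, n_units) arrays, one per CNN layer
    erp:               (n_images, n_electrodes, n_times) array
    Returns per-time-point indices of the best-matching layer and the rho values.
    """
    layer_rdms = [rdm(acts) for acts in layer_activations]
    n_times = erp.shape[2]
    best_layer = np.zeros(n_times, dtype=int)
    best_rho = np.zeros(n_times)
    for t in range(n_times):
        erp_rdm = rdm(erp[:, :, t])  # scalp topography per image at time t
        rhos = [spearmanr(lr, erp_rdm).correlation for lr in layer_rdms]
        best_layer[t] = int(np.argmax(rhos))
        best_rho[t] = max(rhos)
    return best_layer, best_rho
```

Plotting `best_layer` against time would recover the qualitative pattern the abstract reports: early layers dominating before 200 ms, with late fully connected layers winning at right occipito-temporal sites around 225 ms.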
Award ID(s):
1736274
NSF-PAR ID:
10066327
Author(s) / Creator(s):
Greene, Michelle R.; Hansen, Bruce C.
Date Published:
Journal Name:
PLOS Computational Biology
Volume:
14
Issue:
7
ISSN:
1553-7358
Page Range / eLocation ID:
e1006327
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Human scene categorization is characterized by its remarkable speed. While many visual and conceptual features have been linked to this ability, significant correlations exist between feature spaces, impeding our ability to determine their relative contributions to scene categorization. Here, we used a whitening transformation to decorrelate a variety of visual and conceptual features and assess the time course of their unique contributions to scene categorization. Participants (both sexes) viewed 2250 full-color scene images drawn from 30 different scene categories while having their brain activity measured through 256-channel EEG. We examined the variance explained at each electrode and time point of visual event-related potential (vERP) data from nine different whitened encoding models. These ranged from low-level features obtained from filter outputs to high-level conceptual features requiring human annotation. The amount of category information in the vERPs was assessed through multivariate decoding methods. Behavioral similarity measures were obtained in separate crowdsourced experiments. We found that all nine models together contributed 78% of the variance of human scene similarity assessments and were within the noise ceiling of the vERP data. Low-level models explained earlier vERP variability (88 ms after image onset), whereas high-level models explained later variance (169 ms). Critically, only high-level models shared vERP variability with behavior. Together, these results suggest that scene categorization is primarily a high-level process, but reliant on previously extracted low-level features. 
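The whitening step described above can be illustrated with a standard ZCA transform, which rotates and rescales a feature matrix so its columns become uncorrelated; the variance explained by each decorrelated regressor in a subsequent vERP encoding model is then uniquely attributable to it. This is a generic sketch under the assumption of stacked per-model predictors, not the authors' pipeline.

```python
import numpy as np

def zca_whiten(X, eps=1e-8):
    """Decorrelate the columns of X (observations x features) via ZCA whitening.

    After the transform the feature covariance is approximately the identity,
    so no two regressors share variance with one another.
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

# Toy check: nine correlated model predictors become uncorrelated.
rng = np.random.default_rng(0)
X = rng.normal(size=(2250, 9)) @ rng.normal(size=(9, 9))  # induce correlations
Xw = zca_whiten(X)
print(np.allclose(np.cov(Xw, rowvar=False), np.eye(9), atol=1e-6))  # True
```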
  2. Abstract

    Learning and recognition can be improved by sorting novel items into categories and subcategories. Such hierarchical categorization is easy when it can be performed according to learned rules (e.g., “if car, then automatic or stick shift” or “if boat, then motor or sail”). Here, we present results showing that human participants acquire categorization rules for new visual hierarchies rapidly, and that, as they do, corresponding hierarchical representations of the categorized stimuli emerge in patterns of neural activation in the dorsal striatum and in posterior frontal and parietal cortex. Participants learned to categorize novel visual objects into a hierarchy with superordinate and subordinate levels based on the objects' shape features, without having been told the categorization rules for doing so. On each trial, participants were asked to report the category and subcategory of the object, after which they received feedback about the correctness of their categorization responses. Participants trained over the course of a one‐hour‐long session while their brain activation was measured using functional magnetic resonance imaging. Over the course of training, significant hierarchy learning took place as participants discovered the nested categorization rules, as evidenced by the occurrence of a learning trial, after which performance suddenly increased. This learning was associated with increased representational strength of the newly acquired hierarchical rules in a corticostriatal network including the posterior frontal and parietal cortex and the dorsal striatum. We also found evidence suggesting that reinforcement learning in the dorsal striatum contributed to hierarchical rule learning.
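The "learning trial" in this design is the point at which trial-by-trial accuracy jumps from near chance to near ceiling. One simple way to estimate such a change point, sketched below, is to pick the trial that maximizes the difference in mean accuracy before versus after the split; the authors' exact criterion is not specified in the abstract, so treat this as an illustrative heuristic.

```python
import numpy as np

def find_learning_trial(correct, min_phase=5):
    """Estimate the trial where accuracy steps up, assuming one change point.

    correct: 1-D array of 0/1 accuracy per trial. Returns the split index
    that maximizes (mean accuracy after) - (mean accuracy before).
    """
    correct = np.asarray(correct, dtype=float)
    n = len(correct)
    best_t, best_gap = None, -np.inf
    for t in range(min_phase, n - min_phase):  # require a few trials per phase
        gap = correct[t:].mean() - correct[:t].mean()
        if gap > best_gap:
            best_t, best_gap = t, gap
    return best_t, best_gap

# Toy run: chance-level guessing, then sudden rule discovery at trial 40.
rng = np.random.default_rng(1)
acc = np.concatenate([rng.random(40) < 0.25, rng.random(60) < 0.9]).astype(float)
print(find_learning_trial(acc))
```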

     
  3. Human scene categorization is rapid and robust, but we have little understanding of how individual features contribute to categorization, or of the time scale of their contribution. This issue is compounded by the non-independence of the many candidate features. Here, we used singular value decomposition to orthogonalize 11 different scene descriptors that included both visual and semantic features. Using high-density EEG and regression analyses, we observed that most explained variability was carried by a late layer of a deep convolutional neural network, as well as by a model of a scene’s functions given by the American Time Use Survey. Furthermore, features that explained more variance also tended to explain earlier variance. These results extend previous large-scale behavioral results showing the importance of functional features for scene categorization. Moreover, these results fail to support models of visual perception that are encapsulated from higher-level cognitive attributes.
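Orthogonalizing correlated descriptors via singular value decomposition, as described above, amounts to replacing the stacked feature matrix with its left singular vectors, which are mutually orthogonal by construction. A minimal sketch, assuming a precomputed image-by-feature matrix; details of the authors' implementation may differ.

```python
import numpy as np

def svd_orthogonalize(X):
    """Replace correlated scene descriptors with orthogonal components.

    X: (n_images, n_features) matrix stacking all descriptors (visual + semantic).
    Returns columns of U spanning the same space as X, mutually orthogonal,
    so each carries non-overlapping variance in a subsequent EEG regression.
    """
    Xc = X - X.mean(axis=0)                 # center before decomposing
    U, s, _ = np.linalg.svd(Xc, full_matrices=False)
    r = int(np.sum(s > 1e-10 * s[0]))       # numerical rank
    return U[:, :r]
```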
  4. Abstract

    Speech processing often occurs amid competing inputs from other modalities, for example, listening to the radio while driving. We examined the extent to which dividing attention between auditory and visual modalities (bimodal divided attention) impacts neural processing of natural continuous speech from acoustic to linguistic levels of representation. We recorded electroencephalographic (EEG) responses when human participants performed a challenging primary visual task, imposing low or high cognitive load while listening to audiobook stories as a secondary task. The two dual-task conditions were contrasted with an auditory single-task condition in which participants attended to stories while ignoring visual stimuli. Behaviorally, the high load dual-task condition was associated with lower speech comprehension accuracy relative to the other two conditions. We fitted multivariate temporal response function encoding models to predict EEG responses from acoustic and linguistic speech features at different representation levels, including auditory spectrograms and information-theoretic models of sublexical-, word-form-, and sentence-level representations. Neural tracking of most acoustic and linguistic features remained unchanged with increasing dual-task load, despite unambiguous behavioral and neural evidence of the high load dual-task condition being more demanding. Compared to the auditory single-task condition, dual-task conditions selectively reduced neural tracking of only some acoustic and linguistic features, mainly at latencies >200 ms, while earlier latencies were surprisingly unaffected. These findings indicate that behavioral effects of bimodal divided attention on continuous speech processing occur not because of impaired early sensory representations but likely at later cognitive processing stages. Crossmodal attention-related mechanisms may not be uniform across different speech processing levels.
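A temporal response function (TRF) of the kind described above is a regularized regression from time-lagged stimulus features to the EEG signal. The sketch below shows the core lagged ridge regression for a single feature and a single channel; the study's multivariate models extend this to many features and channels, and the function names here are illustrative.

```python
import numpy as np

def lagged_design(stim, lags):
    """Design matrix with one column per lag of the stimulus feature."""
    X = np.stack([np.roll(stim, lag) for lag in lags], axis=1)
    for i, lag in enumerate(lags):  # zero out samples that wrapped around
        if lag > 0:
            X[:lag, i] = 0.0
        elif lag < 0:
            X[lag:, i] = 0.0
    return X

def fit_trf(stim, eeg, lags, alpha=1.0):
    """Ridge regression from time-lagged stimulus feature to one EEG channel.

    stim, eeg: (n_samples,) arrays at the same sampling rate.
    lags:      e.g. range(0, 81) for 0-400 ms at 200 Hz.
    Returns one TRF weight per lag.
    """
    X = lagged_design(np.asarray(stim, dtype=float), list(lags))
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ np.asarray(eeg, dtype=float))
```

Comparing TRFs (or their prediction accuracy) across single-task and dual-task conditions, separately for early and late lag windows, is what licenses the abstract's contrast between intact early tracking and reduced tracking beyond 200 ms.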

     
  5. Abstract

    The human visual cortex is organized in a hierarchical manner. Although evidence supporting this hypothesis has accumulated, specific details regarding the spatiotemporal information flow remain open. Here we present detailed spatiotemporal correlation profiles of neural activity with low‐level and high‐level features derived from an eight‐layer neural network pretrained for object recognition. These correlation profiles indicate an early‐to‐late shift from low‐level features to high‐level features and from low‐level regions to higher‐level regions along the visual hierarchy, consistent with feedforward information flow. Additionally, we computed three sets of features from the low‐ and high‐level features provided by the neural network: object‐category‐relevant low‐level features (the common components between low‐level and high‐level features), low‐level features roughly orthogonal to high‐level features (the residual Layer 1 features), and unique high‐level features that were roughly orthogonal to low‐level features (the residual Layer 7 features). Contrasting the correlation effects of the common components and the residual Layer 1 features, we observed that the early visual cortex (EVC) exhibited a similar amount of correlation with the two feature sets early in time, but in a later time window, the EVC exhibited a higher and longer correlation effect with the common components (i.e., the low‐level object‐category‐relevant features) than with the low‐level residual features, an effect unlikely to arise from purely feedforward information flow. Overall, our results indicate that non‐feedforward processes, for example, top‐down influences from mental representations of categories, may facilitate differentiation between these two types of low‐level features within the EVC.
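The decomposition into common and residual components described above can be expressed as a linear projection: the part of the low-level features predictable from the high-level features is the common component, and what remains is orthogonal to the high-level feature space. A minimal sketch, assuming precomputed activation matrices; the authors' exact procedure may differ.

```python
import numpy as np

def split_common_residual(low, high):
    """Split low-level features into components shared with, and orthogonal
    to, a high-level feature space, via least-squares projection.

    low:  (n_images, n_low)  e.g. Layer 1 activations
    high: (n_images, n_high) e.g. Layer 7 activations
    Returns (common, residual) with low = common + residual, where residual
    is orthogonal to the column space of `high`.
    """
    beta, *_ = np.linalg.lstsq(high, low, rcond=None)
    common = high @ beta      # part of `low` predictable from `high`
    residual = low - common   # part `high` cannot account for
    return common, residual
```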

     