Research at the intersection of computer vision and neuroscience has revealed a hierarchical correspondence between the layers of deep convolutional neural networks (DCNNs) and the cascade of regions along the human ventral visual cortex. Recently, studies have uncovered the emergence of human-interpretable concepts within the layers of DCNNs trained to identify visual objects and scenes. Here, we asked whether an artificial neural network (with a convolutional structure) trained for visual categorization would demonstrate spatial correspondences with human brain regions showing central/peripheral biases. Using representational similarity analysis, we compared the activations of convolutional layers of a DCNN trained for object and scene categorization with neural representations in human visual brain regions. The results reveal a brain-like topographical organization in the layers of the DCNN, such that activations of layer units with a central bias were associated with brain regions with foveal tendencies (e.g., fusiform gyrus), and activations of layer units selective for image backgrounds were associated with cortical regions showing a peripheral preference (e.g., parahippocampal cortex). The emergence of this categorical topographical correspondence between DCNNs and brain regions suggests that these models are a good approximation of the perceptual representation generated by biological neural networks.
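The core of the representational similarity analysis (RSA) described above can be sketched in a few lines. The sketch below uses entirely synthetic stand-in data (random "layer activations" and "voxel responses" for 40 hypothetical stimuli); real analyses would use measured DCNN activations and fMRI responses to the same stimulus set. The second-order comparison of dissimilarity matrices via Spearman correlation is the standard RSA step.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(responses):
    """Representational dissimilarity matrix: condensed correlation distances
    between response patterns (one row per stimulus)."""
    return pdist(responses, metric="correlation")

rng = np.random.default_rng(0)
n_stimuli = 40

# Synthetic stand-ins: one DCNN layer's unit activations, and voxel responses
# that partially share structure with the layer (so the RDMs should agree).
layer_acts = rng.standard_normal((n_stimuli, 512))
voxel_resp = 0.5 * layer_acts[:, :100] + rng.standard_normal((n_stimuli, 100))

# Second-order comparison: rank-correlate the two RDMs (Spearman's rho).
rho, p = spearmanr(rdm(layer_acts), rdm(voxel_resp))
print(f"RSA similarity (Spearman rho): {rho:.2f}")
```

Because RSA compares geometries of representations rather than raw responses, it sidesteps the need for any unit-to-voxel mapping, which is what makes DCNN-to-brain comparisons like the one above possible.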
Mental models provide a cognitive framework for spatially organizing information while reasoning about the world. However, transitive reasoning studies often rely on perception of stimuli that contain visible spatial features, allowing the possibility that the associated neural representations are specific to inherently spatial content. Here, we test the hypothesis that neural representations of mental models generated through transitive reasoning rely on a frontoparietal network irrespective of the spatial nature of the stimulus content. Content within the three models ranges from expressly visuospatial to abstract. All mental models that participants generated were based on inferred relationships never directly observed. Using multivariate representational similarity analysis, we show that patterns representative of mental models were revealed in both the superior parietal lobule and anterior prefrontal cortex and converged across stimulus types. These results support the conclusion that, independent of content, transitive reasoning using mental models relies on neural mechanisms associated with spatial cognition.
- Journal Name: Communications Biology (Nature Publishing Group)
- Sponsoring Org: National Science Foundation
More Like this
Cognitive neuroscience methods can identify the fMRI-measured neural representation of familiar individual concepts, such as apple, and decompose them into meaningful neural and semantic components. This approach was applied here to determine the neural representations and underlying dimensions of representation of far more abstract physics concepts related to matter and energy, such as fermion and dark matter, in the brains of 10 Carnegie Mellon physics faculty members who thought about the main properties of each of the concepts. One novel dimension coded the measurability vs. immeasurability of a concept. Another novel dimension of representation, evoked particularly by post-classical concepts, was associated with four types of cognitive processes, each linked to particular brain regions: (1) reasoning about intangibles, taking into account their separation from direct experience and observability; (2) assessing consilience with other, firmer knowledge; (3) causal reasoning about relations that are not apparent or observable; and (4) knowledge management of a large knowledge organization consisting of a multi-level structure of other concepts. Two other underlying dimensions previously found in physics students, periodicity and mathematical formulation, were also present in this faculty sample. The data were analyzed using factor analysis of stably responding voxels and a Gaussian naïve Bayes machine-learning classification of …
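The Gaussian naïve Bayes classification mentioned in this abstract fits, for each class, a per-voxel mean and variance, and classifies a pattern by summed log-likelihoods plus log-priors. A minimal numpy sketch, using synthetic "voxel patterns" for two hypothetical concept classes (a small mean shift stands in for real class structure; actual studies classify measured fMRI activations):

```python
import numpy as np

rng = np.random.default_rng(1)
n_per_class, n_voxels = 30, 50

# Synthetic stable-voxel patterns for two concept classes, separated by a
# small per-voxel mean shift (stand-in for real class-specific activation).
X0 = rng.standard_normal((n_per_class, n_voxels))
X1 = rng.standard_normal((n_per_class, n_voxels)) + 0.8
X = np.vstack([X0, X1])
y = np.array([0] * n_per_class + [1] * n_per_class)

def gnb_fit(X, y):
    """Per-class voxel means, variances, and log-priors."""
    classes = np.unique(y)
    mu = np.array([X[y == c].mean(axis=0) for c in classes])
    var = np.array([X[y == c].var(axis=0) + 1e-9 for c in classes])
    log_prior = np.array([np.log((y == c).mean()) for c in classes])
    return mu, var, log_prior

def gnb_predict(X, mu, var, log_prior):
    """Pick the class maximizing summed Gaussian log-likelihood + log-prior."""
    ll = -0.5 * (np.log(2 * np.pi * var)[None] +
                 (X[:, None, :] - mu[None]) ** 2 / var[None]).sum(axis=2)
    return (ll + log_prior).argmax(axis=1)

mu, var, log_prior = gnb_fit(X, y)
acc = (gnb_predict(X, mu, var, log_prior) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The "naïve" independence assumption (a diagonal covariance per class) is what keeps the classifier tractable when the number of voxels far exceeds the number of training patterns, as is typical in fMRI decoding.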
The real world is uncertain and ever changing, constantly presenting new sets of behavioral options. To attain the flexibility required to tackle these challenges successfully, most mammalian brains are equipped with certain computational abilities that rely on the prefrontal cortex (PFC). By examining learning in terms of internal models associating stimuli, actions, and outcomes, we argue here that adaptive behavior relies on specific interactions between multiple systems including: (1) selective models learning stimulus–action associations through rewards; (2) predictive models learning stimulus- and/or action–outcome associations through statistical inferences anticipating behavioral outcomes; and (3) contextual models learning external cues associated with latent states of the environment. Critically, the PFC combines these internal models by forming task sets to drive behavior and, moreover, constantly evaluates the reliability of actor task sets in predicting external contingencies to switch between task sets or create new ones. We review different models of adaptive behavior to demonstrate how their components map onto this unifying framework and specific PFC regions. Finally, we discuss how our framework may help to better understand the neural computations and the cognitive architecture of PFC regions guiding adaptive behavior.
Neural efficiency and spatial task difficulty: A road forward to mapping students’ neural engagement in spatial cognition
The current study examined the neural correlates of spatial rotation in eight engineering undergraduates. Mastering engineering graphics requires students to mentally visualize in 3D and mentally rotate parts when developing 2D drawings. Students’ spatial rotation skills play a significant role in learning and mastering engineering graphics. Traditionally, the assessment of students’ spatial skills involves no measurements of neural activity during student performance of spatial rotation tasks. We used electroencephalography (EEG) to record neural activity while students performed the Revised Purdue Spatial Visualization Test: Visualization of Rotations (Revised PSVT:R). The two main objectives were to 1) determine whether high versus low performers on the Revised PSVT:R show differences in EEG oscillations and 2) identify EEG oscillatory frequency bands sensitive to item difficulty on the Revised PSVT:R. Overall performance on the Revised PSVT:R determined whether participants were considered high or low performers: students scoring 90% or higher were considered high performers (5 students), whereas students scoring under 90% were considered low performers (3 students). Time-frequency analysis of the EEG data quantified power in several oscillatory frequency bands (alpha, beta, theta, gamma, delta) for comparison between low and high performers, as well as between difficulty levels of the spatial rotation problems. Although we …
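The band-power quantification this abstract describes is commonly done by estimating a power spectral density and integrating it over each band. A minimal sketch with a synthetic single-channel signal, assuming a 256 Hz sampling rate and conventional band boundaries (actual EEG setups and band definitions vary; real analyses would also use proper time-frequency methods per trial):

```python
import numpy as np
from scipy.signal import welch

fs = 256  # hypothetical sampling rate (Hz)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

# Synthetic one-channel "EEG": a 10 Hz (alpha-band) oscillation plus noise.
rng = np.random.default_rng(2)
t = np.arange(0, 10, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Welch power spectral density estimate (2 s segments -> 0.5 Hz resolution).
freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)

def bandpower(freqs, psd, lo, hi):
    """Integrated PSD between lo and hi Hz (rectangle rule on the Welch grid)."""
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

powers = {name: bandpower(freqs, psd, lo, hi) for name, (lo, hi) in bands.items()}
# Alpha should dominate for this synthetic 10 Hz signal.
print(max(powers, key=powers.get))
```

Comparing such band powers between conditions (high vs. low performers, easy vs. hard items) is the basic unit of analysis behind the group comparisons described above.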
Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity
Speech neuroprosthetics aim to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals who have severely limited means of communication. Recent advances in decoding approaches have led to high-quality reconstructions of acoustic speech from invasively measured neural activity. However, most prior research utilizes data collected during open-loop experiments of articulated speech, which might not directly translate to imagined speech processes. Here, we present an approach that synthesizes audible speech in real-time for both imagined and whispered speech conditions. With a participant implanted with stereotactic depth electrodes, we were able to reliably generate audible speech in real-time. The decoding models rely predominantly on frontal activity, suggesting that speech processes have similar representations when vocalized, whispered, or imagined. While the reconstructed audio is not yet intelligible, our real-time synthesis approach represents an essential step towards investigating how patients will learn to operate a closed-loop speech neuroprosthesis based on imagined speech.