

Title: Distributed code for semantic relations predicts neural similarity during analogical reasoning
The ability to generate and process semantic relations is central to many aspects of human cognition. Theorists have long debated whether such relations are coarsely coded as links in a semantic network or finely coded as distributed patterns over some core set of abstract relations. The form and content of the conceptual and neural representations of semantic relations are yet to be empirically established. Using sequential presentation of verbal analogies, we compared neural activity during analogy judgments with predictions derived from alternative computational models of relational dissimilarity, to adjudicate among rival accounts of how semantic relations are coded and compared in the brain. We found that a frontoparietal network encodes the three relation types included in the design. A computational model based on semantic relations coded as distributed representations over a pool of abstract relations predicted neural activity for individual relations within the left superior parietal cortex and for second-order comparisons of relations within a broader left-lateralized network.
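The model comparison described here follows the general logic of representational similarity analysis (RSA): pairwise dissimilarities predicted by a computational model are correlated with pairwise dissimilarities between neural activity patterns. The following is a minimal sketch of that logic, with hypothetical data shapes and variable names rather than the authors' actual pipeline:

```python
# Minimal RSA sketch: does a model of relational dissimilarity predict
# neural pattern dissimilarity? All inputs below are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical inputs: one distributed relation vector per analogy item
# (model side) and one voxel activity pattern per item (neural side).
n_items, n_dims, n_voxels = 24, 10, 200
model_vectors = rng.random((n_items, n_dims))      # e.g., relation embeddings
neural_patterns = rng.random((n_items, n_voxels))  # e.g., per-item beta patterns

# Model RDM: pairwise dissimilarity between relation vectors.
model_rdm = pdist(model_vectors, metric="cosine")

# Neural RDM: 1 - correlation between voxel patterns.
neural_rdm = pdist(neural_patterns, metric="correlation")

# The model "predicts neural similarity" to the extent the two RDMs agree.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho={rho:.3f}, p={p:.3g}")
```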
Award ID(s): 1827374
NSF-PAR ID: 10231813
Journal Name: Journal of Cognitive Neuroscience
Volume: 33
ISSN: 0898-929X
Page Range / eLocation ID: 377-389
Sponsoring Org: National Science Foundation
More Like this
  1. Driver maneuver interaction learning (DMIL) refers to the classification task of identifying different driver-vehicle maneuver interactions (e.g., left/right turns). Existing studies have largely focused on centralized collection of sensor data from drivers' smartphones (e.g., inertial measurement unit, or IMU, readings from the accelerometer and gyroscope). Such a centralized mechanism may be precluded by data regulatory constraints. Furthermore, enabling an adaptive and accurate DMIL framework remains challenging due to (i) the complexity of heterogeneous driver maneuver patterns, and (ii) the impact of anomalous driver maneuvers caused by, for instance, aggressive driving styles and behaviors.

    To overcome the above challenges, we propose AF-DMIL, an Anomaly-aware Federated Driver Maneuver Interaction Learning system. We focus on real-world IMU sensor datasets (e.g., collected by smartphones) for our pilot case study. In particular, we have designed three heterogeneous representations for AF-DMIL covering spectral, time-series, and statistical features derived from the IMU sensor readings. We have designed a novel heterogeneous representation attention network (HetRANet) based on spectral channel attention, temporal sequence attention, and statistical feature learning mechanisms, jointly capturing and identifying the complex patterns within driver maneuver behaviors. Furthermore, we have designed a densely connected convolutional neural network within HetRANet to enable complex feature extraction and enhance its computational efficiency. In addition, we have designed within AF-DMIL a novel anomaly-aware federated learning approach for decentralized DMIL that is robust to anomalous maneuver data. To ease extraction of maneuver patterns and evaluation of their mutual differences, we have designed an embedding projection network that projects the high-dimensional driver maneuver features into a low-dimensional space and derives exemplars that represent the driver maneuver patterns for mutual comparison. AF-DMIL then leverages the mutual differences among the exemplars to identify those that exhibit anomalous patterns and deviate from the others, and mitigates their impact on the federated DMIL. We have conducted extensive driver data analytics and experimental studies on three real-world datasets (one harvested by ourselves) to evaluate the AF-DMIL prototype, demonstrating its accuracy and effectiveness compared to state-of-the-art DMIL baselines (on average, more than a 13% improvement in DMIL accuracy), as well as fewer communication rounds (on average, 29.20% fewer than existing distributed learning mechanisms).
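    A toy sketch of the anomaly-aware aggregation idea, assuming each client supplies a low-dimensional exemplar from an embedding projection network: clients whose exemplars deviate strongly from the rest are treated as anomalous and excluded from federated averaging. All names, shapes, and thresholds below are illustrative, not the actual AF-DMIL implementation:

```python
# Toy sketch of anomaly-aware federated averaging: clients whose maneuver
# exemplars deviate strongly from the others are excluded from aggregation.
# Shapes, the threshold tau, and the weighting rule are illustrative only.
import numpy as np

def anomaly_aware_fedavg(client_updates, client_exemplars, tau=1.5):
    """client_updates: list of 1-D model-parameter vectors, one per client.
    client_exemplars: list of low-dimensional exemplar vectors (e.g., from
    an embedding projection network). Returns the aggregated update."""
    exemplars = np.stack(client_exemplars)
    # Mean distance from each exemplar to all of the others.
    dists = np.linalg.norm(exemplars[:, None, :] - exemplars[None, :, :], axis=-1)
    mean_dist = dists.sum(axis=1) / (len(exemplars) - 1)
    # Flag clients whose mean distance is an outlier (z-score above tau).
    z = (mean_dist - mean_dist.mean()) / (mean_dist.std() + 1e-8)
    weights = np.where(z > tau, 0.0, 1.0)  # zero weight for anomalous clients
    if weights.sum() == 0:                 # fall back to plain FedAvg
        weights = np.ones_like(weights)
    return np.average(np.stack(client_updates), axis=0, weights=weights)

# Synthetic usage: client 4's exemplar is a far-off "aggressive driving" outlier.
rng = np.random.default_rng(1)
updates = [rng.normal(size=8) for _ in range(4)]
exemplars = [rng.normal(size=3) for _ in range(3)] + [rng.normal(10.0, 0.5, size=3)]
print(anomaly_aware_fedavg(updates, exemplars))
```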

     
  2. Abstract

    Introduction

    How do multiple sources of information interact to form mental representations of object categories? It is commonly held that object categories reflect the integration of perceptual features and semantic/knowledge‐based features. To explore the relative contributions of these two sources of information, we used functional magnetic resonance imaging (fMRI) to identify regions involved in the representation of object categories with shared visual and/or semantic features.

    Methods

    Participants (N = 20) viewed a series of objects that varied in their degree of visual and semantic overlap in the MRI scanner. We used a blocked adaptation design to identify sensitivity to visual and semantic features in a priori visual processing regions and in a distributed network of object processing regions with an exploratory whole‐brain analysis.

    Results

    Somewhat surprisingly, within higher‐order visual processing regions, specifically lateral occipital cortex (LOC), we did not obtain any difference in neural adaptation for shared visual versus shared semantic category membership. More broadly, both visual and semantic information affected a distributed network of independently identified category‐selective regions. Adaptation was seen in a whole‐brain network of processing regions in response to both visual similarity and semantic similarity; specifically, the angular gyrus (AnG) adapted to visual similarity, and the dorsomedial prefrontal cortex (DMPFC) adapted to both visual and semantic similarity.
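    As a rough illustration of how a block-adaptation effect like the one reported here can be quantified: adaptation shows up as a lower mean response in blocks whose items share a feature than in unrelated baseline blocks. The sketch below uses entirely synthetic data and units, not the study's measurements:

```python
# Illustrative block-adaptation index: mean response in unrelated baseline
# blocks minus mean response in blocks sharing visual or semantic features.
# All values are synthetic (hypothetical units of percent signal change).
import numpy as np

rng = np.random.default_rng(2)
unrelated = rng.normal(1.0, 0.1, size=20)       # no shared features
shared_visual = rng.normal(0.8, 0.1, size=20)   # blocks sharing visual features
shared_semantic = rng.normal(0.8, 0.1, size=20) # blocks sharing semantic features

visual_adaptation = unrelated.mean() - shared_visual.mean()
semantic_adaptation = unrelated.mean() - shared_semantic.mean()
print(f"visual adaptation index:   {visual_adaptation:.3f}")
print(f"semantic adaptation index: {semantic_adaptation:.3f}")
```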

    Conclusions

    Our findings suggest that perceptual features help organize mental categories throughout the object processing hierarchy. Most notably, visual similarity also influenced adaptation in nonvisual brain regions (i.e., AnG and DMPFC). We conclude that category‐relevant visual features are maintained in higher‐order conceptual representations and visual information plays an important role in both the acquisition and neural representation of conceptual object categories.

     
  3.
    Relational integration is required when multiple explicit representations of relations between entities must be jointly considered to make inferences. We provide an overview of the neural substrate of relational integration in humans and the processes that support it, focusing on work on analogical and deductive reasoning. In addition to neural evidence, we consider behavioral and computational work that has informed neural investigations of the representations of individual relations and of relational integration. In very general terms, evidence from neuroimaging, neuropsychological, and neuromodulatory studies points to a small set of regions (generally left lateralized) that appear to constitute key substrates for component processes of relational integration. These include posterior parietal cortex, implicated in the representation of first-order relations (e.g., A:B); rostrolateral pFC, apparently central in integrating first-order relations so as to generate and/or evaluate higher-order relations (e.g., A:B::C:D); dorsolateral pFC, involved in maintaining relations in working memory; and ventrolateral pFC, implicated in interference control (e.g., inhibiting salient information that competes with relevant relations). Recent work has begun to link computational models of relational representation and reasoning with patterns of neural activity within these brain areas. 
  4. Key points

    Visual attention involves discrete multispectral oscillatory responses in visual and ‘higher‐order’ prefrontal cortices.

    Prefrontal cortex laterality effects during visual selective attention are poorly characterized.

    High‐definition transcranial direct current stimulation dynamically modulated right‐lateralized fronto‐visual theta oscillations compared to those observed in left fronto‐visual pathways.

    Increased connectivity in right fronto‐visual networks after stimulation of the left dorsolateral prefrontal cortex resulted in faster task performance in the context of distractors.

    Our findings show clear laterality effects in theta oscillatory activity along prefrontal–visual cortical pathways during visual selective attention.

    Abstract

    Studies of visual attention have implicated oscillatory activity in the recognition, protection and temporal organization of attended representations in visual cortices. These studies have also shown that higher‐order regions such as the prefrontal cortex are critical to attentional processing, but far less is understood regarding prefrontal laterality differences in attention processing. To examine this, we selectively applied high‐definition transcranial direct current stimulation (HD‐tDCS) to the left or right dorsolateral prefrontal cortex (DLPFC). We predicted that HD‐tDCS of the left versus right prefrontal cortex would differentially modulate performance on a visual selective attention task and alter the underlying oscillatory network dynamics. Our randomized crossover design included 27 healthy adults who underwent three separate sessions of HD‐tDCS (sham, left DLPFC and right DLPFC) for 20 min. Following stimulation, participants completed an attention protocol during magnetoencephalography. The resulting oscillatory dynamics were imaged using beamforming, and peak task‐related neural activity was subjected to dynamic functional connectivity analyses to evaluate the impact of stimulation site (i.e. left or right DLPFC) on neural interactions. Our results indicated that HD‐tDCS over the left DLPFC differentially modulated right fronto‐visual functional connectivity within the theta band compared to HD‐tDCS of the right DLPFC and, further, specifically modulated the oscillatory response for detecting targets among an array of distractors. Importantly, these findings provide network‐specific insight into the complex oscillatory mechanisms serving visual selective attention.
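    One common way to quantify theta-band connectivity of the kind reported here is the phase-locking value (PLV) between two region time series filtered to the theta band. The sketch below is a generic illustration with synthetic signals and an assumed sampling rate, not the authors' beamforming-based pipeline:

```python
# Generic theta-band phase-locking value (PLV) between a "prefrontal" and a
# "visual" signal. Both signals are synthetic; fs is an assumed sampling rate.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0  # sampling rate in Hz (illustrative)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(3)

# Two signals sharing a 6 Hz (theta) component plus independent noise.
shared = np.sin(2 * np.pi * 6 * t)
prefrontal = shared + 0.5 * rng.normal(size=t.size)
visual = shared + 0.5 * rng.normal(size=t.size)

# Band-pass to theta (4-8 Hz) and extract instantaneous phase.
b, a = butter(4, [4, 8], btype="bandpass", fs=fs)
phase_pf = np.angle(hilbert(filtfilt(b, a, prefrontal)))
phase_vis = np.angle(hilbert(filtfilt(b, a, visual)))

# PLV: magnitude of the mean phase-difference vector (1 = perfect locking).
plv = np.abs(np.mean(np.exp(1j * (phase_pf - phase_vis))))
print(f"theta-band PLV: {plv:.3f}")
```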

     
  5. Abstract

    We investigated how the human brain integrates experiences of specific events to build general knowledge about typical event structure. We examined an episodic memory area important for temporal relations, anterior-lateral entorhinal cortex, and a semantic memory area important for action concepts, middle temporal gyrus, to understand how and when these areas contribute to these processes. Participants underwent functional magnetic resonance imaging while learning and recalling temporal relations among novel events over two sessions 1 week apart. Across distinct contexts, individual temporal relations among events could either be consistent or inconsistent with each other. Within each context, during the recall phase, we measured associative coding as the difference of multivoxel correlations among related vs unrelated pairs of events. Neural regions that form integrative representations should exhibit stronger associative coding in the consistent than the inconsistent contexts. We found evidence of integrative representations that emerged quickly in anterior-lateral entorhinal cortex (at session 1), and only subsequently in middle temporal gyrus, which showed a significant change across sessions. A complementary pattern of findings was seen with signatures during learning. This suggests that integrative representations are established early in anterior-lateral entorhinal cortex and may be a pathway to the later emergence of semantic knowledge in middle temporal gyrus.
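    The associative-coding measure defined above (mean multivoxel correlation for related pairs minus that for unrelated pairs) can be expressed compactly. The sketch below uses synthetic patterns and hypothetical pair labels rather than the study's data:

```python
# Minimal sketch of the associative-coding measure: mean multivoxel pattern
# correlation for related event pairs minus that for unrelated pairs.
# Patterns and pair labels below are synthetic.
import numpy as np

rng = np.random.default_rng(4)
n_events, n_voxels = 12, 100
patterns = rng.normal(size=(n_events, n_voxels))  # one pattern per event

# Hypothetical pair labels: indices of related and unrelated event pairs.
related_pairs = [(0, 1), (2, 3), (4, 5)]
unrelated_pairs = [(0, 6), (2, 8), (4, 10)]

def mean_pair_corr(pairs):
    """Average Pearson correlation between the patterns of each event pair."""
    return np.mean([np.corrcoef(patterns[i], patterns[j])[0, 1] for i, j in pairs])

associative_coding = mean_pair_corr(related_pairs) - mean_pair_corr(unrelated_pairs)
print(f"associative coding index: {associative_coding:.3f}")
```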

     