Title: Distributed code for semantic relations predicts neural similarity during analogical reasoning
The ability to generate and process semantic relations is central to many aspects of human cognition. Theorists have long debated whether such relations are coarsely coded as links in a semantic network or finely coded as distributed patterns over some core set of abstract relations. The form and content of the conceptual and neural representations of semantic relations have yet to be empirically established. Using sequential presentation of verbal analogies, we compared neural activity during analogy judgments with predictions derived from alternative computational models of relational dissimilarity, in order to adjudicate among rival accounts of how semantic relations are coded and compared in the brain. We found that a frontoparietal network encodes the three relation types included in the design. A computational model based on semantic relations coded as distributed representations over a pool of abstract relations predicted neural activity for individual relations within the left superior parietal cortex, and for second-order comparisons of relations within a broader left-lateralized network.
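The comparison described in the abstract follows the general logic of representational similarity analysis: compute pairwise dissimilarities between relation instances in the model's space and in the neural (multivoxel) space, then rank-correlate the two. The sketch below illustrates that logic only; the vectors, dimensions, and counts are random placeholders, not the study's data or code.

```python
# Illustrative RSA-style comparison: model-predicted relational
# dissimilarities vs. neural pattern dissimilarities (all data fabricated).
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical distributed codes for 6 relation instances over a pool
# of 10 abstract relations, and hypothetical multivoxel patterns.
model_codes = rng.random((6, 10))
neural_patterns = rng.random((6, 50))

def cosine_dissimilarity(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Lower-triangle dissimilarity vectors for both spaces.
pairs = list(combinations(range(6), 2))
model_rdm = [cosine_dissimilarity(model_codes[i], model_codes[j]) for i, j in pairs]
neural_rdm = [cosine_dissimilarity(neural_patterns[i], neural_patterns[j]) for i, j in pairs]

# Rank correlation between the two dissimilarity structures is the
# statistic that adjudicates between candidate relation codes.
rho, p_value = spearmanr(model_rdm, neural_rdm)
```

A model whose distributed code captures the relevant relational structure should yield a reliably positive rho against the neural dissimilarities.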
Award ID(s): 1827374
PAR ID: 10231813
Journal Name: Journal of Cognitive Neuroscience
Volume: 33
ISSN: 0898-929X
Page Range / eLocation ID: 377-389
Sponsoring Org: National Science Foundation
More Like this
  1.
    Relational integration is required when multiple explicit representations of relations between entities must be jointly considered to make inferences. We provide an overview of the neural substrate of relational integration in humans and the processes that support it, focusing on work on analogical and deductive reasoning. In addition to neural evidence, we consider behavioral and computational work that has informed neural investigations of the representations of individual relations and of relational integration. In very general terms, evidence from neuroimaging, neuropsychological, and neuromodulatory studies points to a small set of regions (generally left lateralized) that appear to constitute key substrates for component processes of relational integration. These include posterior parietal cortex, implicated in the representation of first-order relations (e.g., A:B); rostrolateral pFC, apparently central in integrating first-order relations so as to generate and/or evaluate higher-order relations (e.g., A:B::C:D); dorsolateral pFC, involved in maintaining relations in working memory; and ventrolateral pFC, implicated in interference control (e.g., inhibiting salient information that competes with relevant relations). Recent work has begun to link computational models of relational representation and reasoning with patterns of neural activity within these brain areas. 
  2. Abstract We investigated how the human brain integrates experiences of specific events to build general knowledge about typical event structure. We examined an episodic memory area important for temporal relations, anterior-lateral entorhinal cortex, and a semantic memory area important for action concepts, middle temporal gyrus, to understand how and when these areas contribute to these processes. Participants underwent functional magnetic resonance imaging while learning and recalling temporal relations among novel events over two sessions 1 week apart. Across distinct contexts, individual temporal relations among events could either be consistent or inconsistent with each other. Within each context, during the recall phase, we measured associative coding as the difference of multivoxel correlations among related vs unrelated pairs of events. Neural regions that form integrative representations should exhibit stronger associative coding in the consistent than the inconsistent contexts. We found evidence of integrative representations that emerged quickly in anterior-lateral entorhinal cortex (at session 1), and only subsequently in middle temporal gyrus, which showed a significant change across sessions. A complementary pattern of findings was seen with signatures during learning. This suggests that integrative representations are established early in anterior-lateral entorhinal cortex and may be a pathway to the later emergence of semantic knowledge in middle temporal gyrus. 
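The associative-coding measure described in this abstract is a contrast between multivoxel pattern correlations: mean correlation among related event pairs minus mean correlation among unrelated pairs. A minimal sketch of that computation, with made-up patterns and pair labels standing in for real fMRI data:

```python
# Sketch of the "associative coding" contrast: related-pair pattern
# correlation minus unrelated-pair pattern correlation (placeholder data).
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.random((8, 100))          # 8 events x 100 voxels (fabricated)
related_pairs = [(0, 1), (2, 3), (4, 5)]
unrelated_pairs = [(0, 2), (1, 4), (3, 6), (5, 7)]

def pattern_corr(a, b):
    # Pearson correlation between two voxel patterns.
    return float(np.corrcoef(a, b)[0, 1])

related_mean = np.mean([pattern_corr(patterns[i], patterns[j]) for i, j in related_pairs])
unrelated_mean = np.mean([pattern_corr(patterns[i], patterns[j]) for i, j in unrelated_pairs])
associative_coding = float(related_mean - unrelated_mean)
```

On the logic of the study, a region forming integrative representations should show a larger value of this contrast in consistent than in inconsistent contexts.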
  3. Human reasoning goes beyond knowledge about individual entities, extending to inferences based on relations between entities. Here we focus on the use of relations in verbal analogical mapping, sketching a general approach based on assessing similarity between patterns of semantic relations between words. This approach combines research in artificial intelligence with work in psychology and cognitive science, with the aim of minimizing hand coding of text inputs for reasoning tasks. The computational framework takes as inputs vector representations of individual word meanings, coupled with semantic representations of the relations between words, and uses these inputs to form semantic-relation networks for individual analogues. Analogical mapping is operationalized as graph matching under cognitive and computational constraints. The approach highlights the central role of semantics in analogical mapping. 
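One way to make "graph matching over semantic-relation networks" concrete is to treat each analog as a set of edges carrying relation vectors and align edges by maximizing total vector similarity. This is a loose sketch of that idea under simplifying assumptions (one-to-one edge alignment, random placeholder vectors), not the authors' implementation:

```python
# Toy edge alignment between two analogs: pick the one-to-one mapping of
# relation edges that maximizes total cosine similarity.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)
source_edges = rng.random((3, 8))   # 3 relation vectors in the source analog
target_edges = rng.random((3, 8))   # 3 relation vectors in the target analog

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = np.array([[cosine(s, t) for t in target_edges] for s in source_edges])

# The Hungarian algorithm finds the assignment with maximal total
# similarity (minimizing negated similarity as cost).
rows, cols = linear_sum_assignment(-sim)
mapping = dict(zip(rows.tolist(), cols.tolist()))
```

The cognitive constraints mentioned in the abstract (e.g., structural consistency) would enter as additional costs or restrictions on admissible assignments.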
  4. By middle childhood, humans are able to learn abstract semantic relations (e.g., antonym, synonym, category membership) and use them to reason by analogy. A deep theoretical challenge is to show how such abstract relations can arise from nonrelational inputs, thereby providing key elements of a protosymbolic representation system. We have developed a computational model that exploits the potential synergy between deep learning from “big data” (to create semantic features for individual words) and supervised learning from “small data” (to create representations of semantic relations between words). Given as inputs labeled pairs of lexical representations extracted by deep learning, the model creates augmented representations by remapping features according to the rank of differences between values for the two words in each pair. These augmented representations aid in coping with the feature alignment problem (e.g., matching those features that make “love-hate” an antonym with the different features that make “rich-poor” an antonym). The model extracts weight distributions that are used to estimate the probabilities that new word pairs instantiate each relation, capturing the pattern of human typicality judgments for a broad range of abstract semantic relations. A measure of relational similarity can be derived and used to solve simple verbal analogies with human-level accuracy. Because each acquired relation has a modular representation, basic symbolic operations are enabled (notably, the converse of any learned relation can be formed without additional training). Abstract semantic relations can be induced by bootstrapping from nonrelational inputs, thereby enabling relational generalization and analogical reasoning. 
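The "augmented representation" step described above, remapping features by the rank of elementwise differences between the two words in a pair, can be sketched as follows. The feature vectors here are random placeholders rather than learned embeddings, and the exact augmentation used by the model may differ:

```python
# Sketch of rank-of-differences feature remapping for a word pair:
# append copies of both feature vectors reordered by the rank of their
# elementwise difference, so that analogous features (e.g., the ones
# making "love-hate" and "rich-poor" antonyms) land in aligned positions.
import numpy as np

rng = np.random.default_rng(3)
love = rng.random(6)    # placeholder semantic features for "love"
hate = rng.random(6)    # placeholder semantic features for "hate"

def augment(w1, w2):
    order = np.argsort(w1 - w2)   # indices sorted by signed difference
    return np.concatenate([w1, w2, w1[order], w2[order]])

pair_vec = augment(love, hate)    # input to the supervised relation learner
```

The supervised stage would then learn, per relation, a weight distribution over these augmented features from a small number of labeled pairs.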
  5. Analogy problems involving multiple ordered relations of the same type create mapping ambiguity, requiring some mechanism for relational integration to achieve mapping accuracy. We address the question of whether the integration of ordered relations depends on their logical form alone, or on semantic representations that differ across relation types. We developed a triplet mapping task that provides a basic paradigm to investigate analogical reasoning with simple relational structures. Experimental results showed that mapping performance differed across orderings based on category, linear order, and causal relations, providing evidence that each transitive relation has its own semantic representation. Hence, human analogical mapping of ordered relations does not depend solely on their formal property of transitivity. Instead, human ability to solve mapping problems by integrating relations relies on the semantics of relation representations. We also compared human performance to the performance of several vector-based computational models of analogy. These models performed above chance but fell short of human performance for some relations, highlighting the need for further model development. 
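The vector-based models used as comparisons here are typically in the spirit of the classic word-embedding offset method, which solves a:b::c:? by finding the vocabulary vector closest to b − a + c. A minimal illustration with toy 2-D embeddings (the real models operate over high-dimensional learned embeddings):

```python
# Vector-offset analogy baseline on toy embeddings: the answer to
# a:b::c:? is the word whose vector is nearest to emb[b] - emb[a] + emb[c].
import numpy as np

emb = {
    "king":  np.array([0.9, 0.8]),
    "queen": np.array([0.9, 0.2]),
    "man":   np.array([0.1, 0.8]),
    "woman": np.array([0.1, 0.2]),
}

def solve_analogy(a, b, c):
    target = emb[b] - emb[a] + emb[c]
    candidates = [w for w in emb if w not in (a, b, c)]
    return min(candidates, key=lambda w: float(np.linalg.norm(emb[w] - target)))

answer = solve_analogy("man", "woman", "king")   # -> "queen"
```

As the abstract notes, baselines of roughly this kind perform above chance on the triplet mapping task but fall short of human accuracy for some relation types.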