Title: Individual differences in judging similarity between semantic relations
The ability to recognize and make inductive inferences based on relational similarity is fundamental to much of human higher cognition. However, relational similarity is not easily defined or measured, which makes it difficult to determine whether individual differences in cognitive capacity or semantic knowledge impact relational processing. In two experiments, we used a multi-arrangement task (previously applied to individual words or objects) to efficiently assess similarities between word pairs instantiating various abstract relations. Experiment 1 established that the method identifies word pairs expressing the same relation as more similar to each other than to those expressing different relations. Experiment 2 extended these results by showing that relational similarity measured by the multi-arrangement task is sensitive to more subtle distinctions. Word pairs instantiating the same specific subrelation were judged as more similar to each other than to those instantiating different subrelations within the same general relation type. In addition, Experiment 2 found that individual differences in both fluid intelligence and crystallized verbal intelligence correlated with differentiation of relation similarity judgments.
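The core measurement idea behind the multi-arrangement task can be sketched briefly. In this kind of task, participants drag items around a two-dimensional arena, and the on-screen distance between two items is taken as their dissimilarity. The sketch below uses hypothetical coordinates and hypothetical word-pair labels for illustration only; it is not the study's data or analysis code.

```python
import numpy as np

# Hypothetical 2-D arena positions for three word pairs (illustrative values):
# two antonym pairs placed near each other, one category-membership pair far away.
positions = {
    "hot-cold":   (0.10, 0.20),   # antonym pair
    "rich-poor":  (0.15, 0.25),   # antonym pair
    "dog-animal": (0.80, 0.75),   # category-membership pair
}

def dissimilarity(item_a, item_b):
    """Euclidean distance between two arranged items, read as dissimilarity."""
    (x1, y1), (x2, y2) = positions[item_a], positions[item_b]
    return float(np.hypot(x1 - x2, y1 - y2))

# Pairs instantiating the same relation should land closer together
# than pairs instantiating different relations.
same_relation = dissimilarity("hot-cold", "rich-poor")     # small distance
diff_relation = dissimilarity("hot-cold", "dog-animal")    # large distance
```

Aggregating such distances across several arrangements yields a full dissimilarity matrix over word pairs, which can then be tested for the same-relation versus different-relation structure the experiments report.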
Award ID(s):
1827374
NSF-PAR ID:
10093733
Journal Name:
Proceedings of the Annual Conference of the Cognitive Science Society
ISSN:
1069-7977
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. By middle childhood, humans are able to learn abstract semantic relations (e.g., antonym, synonym, category membership) and use them to reason by analogy. A deep theoretical challenge is to show how such abstract relations can arise from nonrelational inputs, thereby providing key elements of a protosymbolic representation system. We have developed a computational model that exploits the potential synergy between deep learning from “big data” (to create semantic features for individual words) and supervised learning from “small data” (to create representations of semantic relations between words). Given as inputs labeled pairs of lexical representations extracted by deep learning, the model creates augmented representations by remapping features according to the rank of differences between values for the two words in each pair. These augmented representations aid in coping with the feature alignment problem (e.g., matching those features that make “love-hate” an antonym with the different features that make “rich-poor” an antonym). The model extracts weight distributions that are used to estimate the probabilities that new word pairs instantiate each relation, capturing the pattern of human typicality judgments for a broad range of abstract semantic relations. A measure of relational similarity can be derived and used to solve simple verbal analogies with human-level accuracy. Because each acquired relation has a modular representation, basic symbolic operations are enabled (notably, the converse of any learned relation can be formed without additional training). Abstract semantic relations can be induced by bootstrapping from nonrelational inputs, thereby enabling relational generalization and analogical reasoning.
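The rank-based remapping step described above can be illustrated with a minimal sketch. This is not the authors' model code; the vectors and values are hypothetical, and the sketch shows only the core idea: reorder each pair's features by the rank of their elementwise differences, so that the most contrastive features occupy the same positions across different pairs (e.g., aligning what makes "love-hate" an antonym with what makes "rich-poor" an antonym).

```python
import numpy as np

def rank_remap(vec_a, vec_b):
    """Illustrative sketch of rank-based feature remapping: sort feature
    positions by the difference between the two words' values, then emit
    both words' features in that shared, rank-based order."""
    diff = vec_a - vec_b
    order = np.argsort(diff)  # feature indices, smallest to largest difference
    # Augmented pair representation: both words re-indexed by difference rank,
    # so contrastive features align across pairs regardless of raw position.
    return np.concatenate([vec_a[order], vec_b[order]])

# Hypothetical 4-dimensional word embeddings (illustrative values only)
love = np.array([0.9, 0.1, 0.5, 0.3])
hate = np.array([0.1, 0.9, 0.4, 0.3])
augmented = rank_remap(love, hate)
```

Supervised learning over such augmented pair representations, rather than over raw feature positions, is what lets the model treat differently-located contrastive features as comparable.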

     
  2. Computational models of verbal analogy and relational similarity judgments can employ different types of vector representations of word meanings (embeddings) generated by machine-learning algorithms. An important question is whether human-like relational processing depends on explicit representations of relations (i.e., representations separable from those of the concepts being related), or whether implicit relation representations suffice. Earlier machine-learning models produced static embeddings for individual words, identical across all contexts. However, more recent Large Language Models (LLMs), which use transformer architectures applied to much larger training corpora, are able to produce contextualized embeddings that have the potential to capture implicit knowledge of semantic relations. Here we compare multiple models based on different types of embeddings to human data concerning judgments of relational similarity and solutions of verbal analogy problems. For two datasets, a model that learns explicit representations of relations, Bayesian Analogy with Relational Transformations (BART), captured human performance more successfully than either a model using static embeddings (Word2vec) or models using contextualized embeddings created by LLMs (BERT, RoBERTa, and GPT-2). These findings support the proposal that human thinking depends on representations that separate relations from the concepts they relate. 
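As a point of contrast with explicit relation models like BART, the implicit-relation baseline for static embeddings can be sketched in a few lines: score how similar the relation a:b is to c:d by the cosine between the embedding difference (offset) vectors. The embeddings below are toy, hypothetical values for illustration; real use would draw vectors from a trained model such as Word2vec.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def relation_similarity(a, b, c, d):
    """Implicit-relation baseline: compare the offset vectors b - a and
    d - c. No separate relation representation is learned, in contrast
    to models like BART that represent relations explicitly."""
    return cosine(b - a, d - c)

# Toy 3-d embeddings (hypothetical values chosen so the offsets are parallel)
king  = np.array([0.8, 0.6, 0.1])
queen = np.array([0.8, 0.1, 0.6])
man   = np.array([0.5, 0.7, 0.2])
woman = np.array([0.5, 0.2, 0.7])

score = relation_similarity(king, queen, man, woman)
```

Parallel offsets yield a score near 1.0; the empirical question raised above is whether such implicit offset comparisons, even with contextualized LLM embeddings, match human relational judgments as well as explicit relation representations do.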
  3. Abstract

    Advances in artificial intelligence have raised a basic question about human intelligence: Is human reasoning best emulated by applying task‐specific knowledge acquired from a wealth of prior experience, or is it based on the domain‐general manipulation and comparison of mental representations? We address this question for the case of visual analogical reasoning. Using realistic images of familiar three‐dimensional objects (cars and their parts), we systematically manipulated viewpoints, part relations, and entity properties in visual analogy problems. We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) that were directly trained to solve these problems and to apply their task‐specific knowledge to analogical reasoning. We also developed a new model using part‐based comparison (PCM) by applying a domain‐general mapping procedure to learned representations of cars and their component parts. Across four‐term analogies (Experiment 1) and open‐ended analogies (Experiment 2), the domain‐general PCM model, but not the task‐specific deep learning models, generated performance similar in key aspects to that of human reasoners. These findings provide evidence that human‐like analogical reasoning is unlikely to be achieved by applying deep learning with big data to a specific type of analogy problem. Rather, humans do (and machines might) achieve analogical reasoning by learning representations that encode structural information useful for multiple tasks, coupled with efficient computation of relational similarity.

     
  4. Abstract Background

    Cardiac differentiation of human-induced pluripotent stem (hiPS) cells consistently produces a mixed population of cardiomyocytes and non-cardiac cell types, even when using well-characterized protocols. We sought to determine whether different cell types might result from intrinsic differences in hiPS cells prior to the onset of differentiation.

    Results

By associating individual differentiated cells that share a common hiPS cell precursor, we tested whether expression variability is predetermined from the hiPS cell state. In a single experiment, cells that shared a progenitor were more transcriptionally similar to each other than to other cells in the differentiated population. However, when the same hiPS cells were differentiated in parallel, we did not observe high transcriptional similarity across differentiations. Additionally, we found that substantial cell death occurs during differentiation in a manner consistent with all cells being equally likely to survive or die, indicating no intrinsic selection bias for cells descended from particular hiPS cell progenitors. To examine how cells grow spatially during differentiation, we labeled cells by expression of marker genes and found that cells expressing the same marker tended to occur in patches. Our results suggest that cell type determination across multiple cell types, once initiated, is maintained in a cell-autonomous manner for multiple divisions.

    Conclusions

    Altogether, our results show that while substantial heterogeneity exists in the initial hiPS cell population, it is not responsible for the variability observed in differentiated outcomes; instead, factors specifying the various cell types likely act during a window that begins shortly after the seeding of hiPS cells for differentiation.

     
  5. Abstract

Identifying abstract relations is essential for commonsense reasoning. Research suggests that even young children can infer relations such as “same” and “different,” but often fail to apply these concepts. Might the process of explaining facilitate the recognition and application of relational concepts? Based on prior work suggesting that explanation can be a powerful tool to promote abstract reasoning, we predicted that children would be more likely to discover and use an abstract relational rule when they were prompted to explain observations instantiating that rule, compared to when they received demonstration alone. Five‐ and six‐year‐olds were given a modified Relational Match‐to‐Sample (RMTS) task, with repeated demonstrations of relational (same) matches by an adult. Half of the children were prompted to explain these matches; the other half reported the match they observed. Children who were prompted to explain showed immediate, stable success, while those only asked to report the outcome of the pedagogical demonstration did not. Findings provide evidence that explanation facilitates early abstraction over and above demonstration alone.

     