Title: Semantic and visual interference in solving pictorial analogies.
Neuropsychological investigations with frontal patients have revealed selective deficits in selecting the relational answer to pictorial analogy problems when the correct option is embedded among foils that exhibit high semantic or visual similarity. In contrast, normal age-matched controls solve the same problems with near-perfect accuracy regardless of whether high-similarity foils are present (in the absence of speed pressure). Using more sensitive measures, the present study sought to determine whether normal young adults are subject to such interference. Experiment 1 used eye-tracking while participants answered multiple-choice 4-term pictorial analogies. Total looking time was longer for semantically similar foils relative to an irrelevant foil. Experiment 2 presented the same problems in a true/false format with emphasis on rapid responding and found that reaction time to correctly reject false analogies was greater (and error rates higher) for those based on semantically or visually similar foils. These findings demonstrate that healthy young adults are sensitive to both semantic and visual similarity when solving pictorial analogy problems. Results are interpreted in relation to neurocomputational models of relational processing.
Award ID(s):
1827374
PAR ID:
10093530
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the Annual Conference of the Cognitive Science Society
ISSN:
1069-7977
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The ability to recognize and make inductive inferences based on relational similarity is fundamental to much of human higher cognition. However, relational similarity is not easily defined or measured, which makes it difficult to determine whether individual differences in cognitive capacity or semantic knowledge impact relational processing. In two experiments, we used a multi-arrangement task (previously applied to individual words or objects) to efficiently assess similarities between word pairs instantiating various abstract relations. Experiment 1 established that the method identifies word pairs expressing the same relation as more similar to each other than to those expressing different relations. Experiment 2 extended these results by showing that relational similarity measured by the multi-arrangement task is sensitive to more subtle distinctions. Word pairs instantiating the same specific subrelation were judged as more similar to each other than to those instantiating different subrelations within the same general relation type. In addition, Experiment 2 found that individual differences in both fluid intelligence and crystallized verbal intelligence correlated with differentiation of relation similarity judgments.
  2. Fitch, T.; Lamm, C.; Leder, H.; Teßmar-Raible, K. (Ed.)
    Is analogical reasoning a task that must be learned to solve from scratch by applying deep learning models to massive numbers of reasoning problems? Or are analogies solved by computing similarities between structured representations of analogs? We address this question by comparing human performance on visual analogies created using images of familiar three-dimensional objects (cars and their subregions) with the performance of alternative computational models. Human reasoners achieved above-chance accuracy for all problem types, but made more errors in several conditions (e.g., when relevant subregions were occluded). We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) directly trained to solve these analogy problems, as well as to that of a compositional model that assesses relational similarity between part-based representations. The compositional model based on part representations, but not the deep learning models, generated qualitative performance similar to that of human reasoners. 
  3. Computational models of verbal analogy and relational similarity judgments can employ different types of vector representations of word meanings (embeddings) generated by machine-learning algorithms. An important question is whether human-like relational processing depends on explicit representations of relations (i.e., representations separable from those of the concepts being related), or whether implicit relation representations suffice. Earlier machine-learning models produced static embeddings for individual words, identical across all contexts. However, more recent Large Language Models (LLMs), which use transformer architectures applied to much larger training corpora, are able to produce contextualized embeddings that have the potential to capture implicit knowledge of semantic relations. Here we compare multiple models based on different types of embeddings to human data concerning judgments of relational similarity and solutions of verbal analogy problems. For two datasets, a model that learns explicit representations of relations, Bayesian Analogy with Relational Transformations (BART), captured human performance more successfully than either a model using static embeddings (Word2vec) or models using contextualized embeddings created by LLMs (BERT, RoBERTa, and GPT-2). These findings support the proposal that human thinking depends on representations that separate relations from the concepts they relate. 
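A concrete baseline for the static-embedding approach compared above is the classic vector-offset method: an implicit relation representation is the difference of the two word embeddings, and relational similarity between word pairs is the cosine between their difference vectors. The sketch below is illustrative only — the tiny hand-built 2-D "embeddings" and function names are assumptions standing in for real Word2vec vectors, not any model from the study.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def relation_vector(emb, a, b):
    """Implicit relation representation: the embedding difference b - a."""
    return emb[b] - emb[a]

def relational_similarity(emb, pair1, pair2):
    """Cosine between the relation vectors of two word pairs."""
    return cosine(relation_vector(emb, *pair1), relation_vector(emb, *pair2))

# Toy 2-D embeddings (assumed for illustration; real models use
# hundreds of dimensions learned from large corpora).
emb = {
    "dog":    np.array([1.0, 0.0]),
    "puppy":  np.array([1.0, 1.0]),   # "young of" shifts the second axis
    "cat":    np.array([0.0, 1.0]),
    "kitten": np.array([0.0, 2.0]),
    "car":    np.array([3.0, 0.5]),
}

same_relation = relational_similarity(emb, ("dog", "puppy"), ("cat", "kitten"))
diff_relation = relational_similarity(emb, ("dog", "puppy"), ("dog", "car"))
```

With these toy vectors the same-relation pairs score 1.0 while the unrelated pair scores far lower. A model with explicit relation representations, such as BART, instead learns a separate transformation for each relation rather than relying on raw embedding differences, which is the contrast the comparison above evaluates.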
  4. Human judgments of similarity and difference are sometimes asymmetrical, with the former being more sensitive than the latter to relational overlap, but the theoretical basis for this asymmetry remains unclear. We test an explanation based on the type of information used to make these judgments (relations versus features) and the comparison process itself (similarity versus difference). We propose that asymmetries arise from two aspects of cognitive complexity that impact judgments of similarity and difference: processing relations between entities is more cognitively demanding than processing features of individual entities, and comparisons assessing difference are more cognitively complex than those assessing similarity. In Experiment 1 we tested this hypothesis for both verbal comparisons between word pairs, and visual comparisons between sets of geometric shapes. Participants were asked to select one of two options that was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard; on ambiguous trials, one option was more featurally similar to the standard, whereas the other was more relationally similar. Given the higher cognitive complexity of processing relations and of assessing difference, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was replicated using more complex story stimuli (Experiment 2). We showed that this pattern can be captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. 
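The weighting account described above can be sketched as a toy scoring rule in which relational overlap receives a larger weight for similarity judgments than for difference judgments. The specific weights and overlap values below are illustrative assumptions, not the paper's fitted parameters.

```python
def weighted_similarity(rel_overlap, feat_overlap, judgment):
    """Score an option's match to the standard.

    Relational overlap is weighted more heavily under a similarity
    instruction than under a difference instruction (weights are
    illustrative assumptions, not fitted values).
    """
    w_rel = 0.7 if judgment == "similarity" else 0.4
    return w_rel * rel_overlap + (1.0 - w_rel) * feat_overlap

# Ambiguous trial: option A matches the standard relationally,
# option B matches it featurally (overlap values are made up).
A = {"rel": 1.0, "feat": 0.0}
B = {"rel": 0.0, "feat": 1.0}

def score(opt, judgment):
    return weighted_similarity(opt["rel"], opt["feat"], judgment)

# "Which is more similar?" -> pick the higher-scoring option.
more_similar = "A" if score(A, "similarity") > score(B, "similarity") else "B"
# "Which is more different?" -> pick the lower-scoring option.
more_different = "A" if score(A, "difference") < score(B, "difference") else "B"
```

Under the similarity instruction the relational option wins (0.7 vs. 0.3), yet under the difference instruction that same option is judged more different (0.4 vs. 0.6), because relational overlap contributes less to its score there — mirroring the asymmetry the abstract reports.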
  5. Analogy problems involving multiple ordered relations of the same type create mapping ambiguity, requiring some mechanism for relational integration to achieve mapping accuracy. We address the question of whether the integration of ordered relations depends on their logical form alone, or on semantic representations that differ across relation types. We developed a triplet mapping task that provides a basic paradigm to investigate analogical reasoning with simple relational structures. Experimental results showed that mapping performance differed across orderings based on category, linear order, and causal relations, providing evidence that each transitive relation has its own semantic representation. Hence, human analogical mapping of ordered relations does not depend solely on their formal property of transitivity. Instead, human ability to solve mapping problems by integrating relations relies on the semantics of relation representations. We also compared human performance to the performance of several vector-based computational models of analogy. These models performed above chance but fell short of human performance for some relations, highlighting the need for further model development. 