Title: Analogy Lays the Foundation for Two Crucial Aspects of Symbolic Development: Intention and Correspondence
Abstract

We argue that analogical reasoning, particularly Gentner's (1983, 2010) structure‐mapping theory, provides an integrative theoretical framework through which we can better understand the development of symbol use. Analogical reasoning can contribute both to the understanding of others’ intentions and the establishment of correspondences between symbols and their referents, two crucial components of symbolic understanding. We review relevant research on the development of symbolic representations, intentionality, comparison, and similarity, and demonstrate how structure‐mapping theory can shed light on several ostensibly disparate findings in the literature. Focusing on visual symbols (e.g., scale models, photographs, and maps), we argue that analogy underlies and supports the understanding of both intention and correspondence, which may enter into a reciprocal bootstrapping process that leads children to gain the prodigious human capacity of symbol use.
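To make the structure-mapping idea concrete, here is a minimal, hypothetical Python sketch: two domains (a scale model and the room it stands for) are described as relational assertions, and the correspondence between entities is chosen to preserve the most shared relational structure. The predicates, objects, and brute-force search are illustrative toys, not the authors' model and not the full structure-mapping engine.

```python
# Toy sketch of structure mapping (in the spirit of Gentner, 1983):
# find the entity correspondence between a symbol (a scale model) and
# its referent (a room) that preserves the most relational structure.
# All predicates and objects are invented for illustration.
from itertools import permutations

# Each domain is a set of (relation, arg1, arg2) assertions.
model = {("left_of", "toy_couch", "toy_lamp"),
         ("under", "toy_ball", "toy_couch")}
room = {("left_of", "couch", "lamp"),
        ("under", "ball", "couch")}

def entities(domain):
    return sorted({arg for _, a1, a2 in domain for arg in (a1, a2)})

def structural_score(mapping, source, target):
    # Count source relations whose relabeled form appears in the target.
    return sum((rel, mapping[a1], mapping[a2]) in target
               for rel, a1, a2 in source)

src, tgt = entities(model), entities(room)
best = max((dict(zip(src, p)) for p in permutations(tgt)),
           key=lambda m: structural_score(m, model, room))
print(best)  # {'toy_ball': 'ball', 'toy_couch': 'couch', 'toy_lamp': 'lamp'}
```

On this view, the child's task of relating a model to a room is solved not by surface matching of individual objects but by finding the mapping that keeps the relations consistent, which is exactly what the score above rewards.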

 
NSF-PAR ID: 10028161
Author(s) / Creator(s):
Publisher / Repository: Wiley-Blackwell
Date Published:
Journal Name: Topics in Cognitive Science
Volume: 9
Issue: 3
ISSN: 1756-8757
Page Range / eLocation ID: p. 738-757
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Mathematical knowledge is constructed hierarchically from basic understanding of quantities and the symbols that denote them. Discrimination of numerical quantity in both symbolic and non‐symbolic formats has been linked to mathematical problem‐solving abilities. However, little is known of the extent to which overlap in quantity representations between symbolic and non‐symbolic formats is related to individual differences in numerical problem solving and whether this relation changes with different stages of development and skill acquisition. Here we investigate the association between neural representational similarity (NRS) across symbolic and non‐symbolic quantity discrimination and arithmetic problem‐solving skills in early and late developmental stages: elementary school children (ages 7–10 years) and adolescents and young adults (AYA, ages 14–21 years). In children, cross‐format NRS in distributed brain regions, including parietal and frontal cortices and the hippocampus, was positively correlated with arithmetic skills. In contrast, no brain region showed a significant association between cross‐format NRS and arithmetic skills in the AYA group. Our findings suggest that the relationship between symbolic‐non‐symbolic NRS and arithmetic skills depends on developmental stage. Taken together, our study provides evidence for both mapping and estrangement hypotheses in the context of numerical problem solving, albeit over different cognitive developmental stages.
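As an illustration of the kind of analysis described in this abstract, the sketch below computes a toy cross-format neural representational similarity (NRS) measure on simulated voxel patterns and then correlates per-child NRS with simulated arithmetic scores. All data, dimensions, and variable names are invented stand-ins, not the study's pipeline.

```python
# Hedged sketch of cross-format NRS on simulated data: voxel patterns
# for symbolic (digit) and non-symbolic (dot-array) quantities 1..9.
import numpy as np
from scipy.stats import pearsonr, spearmanr

rng = np.random.default_rng(0)
n_quantities, n_voxels = 9, 50
symbolic = rng.normal(size=(n_quantities, n_voxels))
nonsymbolic = symbolic + rng.normal(scale=1.0, size=(n_quantities, n_voxels))

# Cross-format NRS: correlate the pattern for quantity q in one format
# with the pattern for the same quantity in the other format.
nrs = np.mean([pearsonr(symbolic[q], nonsymbolic[q])[0]
               for q in range(n_quantities)])

# Brain-behavior link: across simulated children, correlate each
# child's regional NRS with an arithmetic skill score.
nrs_per_child = rng.uniform(0.0, 0.6, size=30)
arithmetic_skill = 50 + 40 * nrs_per_child + rng.normal(scale=5, size=30)
rho, p = spearmanr(nrs_per_child, arithmetic_skill)
print(f"cross-format NRS = {nrs:.2f}; NRS-skill rho = {rho:.2f} (p = {p:.3f})")
```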

     
  2. Abstract
    Variable binding is a cornerstone of symbolic reasoning and cognition. But how binding can be implemented in connectionist models has puzzled neuroscientists, cognitive psychologists, and neural network researchers for many decades. One type of connectionist model that naturally includes a binding operation is vector symbolic architectures (VSAs). In contrast to other proposals for variable binding, the binding operation in VSAs is dimensionality-preserving, which enables representing complex hierarchical data structures, such as trees, while avoiding a combinatorial expansion of dimensionality. Classical VSAs encode symbols by dense randomized vectors, in which information is distributed throughout the entire neuron population. By contrast, in the brain, features are encoded more locally, by the activity of single neurons or small groups of neurons, often forming sparse vectors of neural activation. Following Laiho et al. (2015), we explore symbolic reasoning with a special case of sparse distributed representations. Using techniques from compressed sensing, we first show that variable binding in classical VSAs is mathematically equivalent to tensor product binding between sparse feature vectors, another well-known binding operation that increases dimensionality. This theoretical result motivates us to study two dimensionality-preserving binding methods that include a reduction of the tensor matrix into a single sparse vector. One binding method, for general sparse vectors, uses random projections; the other, block-local circular convolution, is defined for sparse vectors with block structure (sparse block-codes). Our experiments reveal that block-local circular convolution binding has ideal properties, whereas random-projection-based binding also works but is lossy. We demonstrate in example applications that a VSA with block-local circular convolution and sparse block-codes reaches performance similar to that of classical VSAs. Finally, we discuss our results in the context of neuroscience and neural networks.
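The two binding operations named in this abstract are easy to sketch. The toy below implements classical circular-convolution binding for dense vectors and a simplified block-local circular convolution for sparse block-codes (one active unit per block). Function names and parameters are illustrative, and this is a minimal rendering of the idea, not the paper's implementation.

```python
# Illustrative sketch of two VSA binding operations.
import numpy as np

def circular_convolution(x, y):
    """Classical HRR/VSA binding for dense vectors (dimensionality-preserving)."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(y)))

def block_local_circular_convolution(x, y, block_size):
    """Bind blockwise: circular convolution within each block."""
    xb = x.reshape(-1, block_size)
    yb = y.reshape(-1, block_size)
    out = np.real(np.fft.ifft(np.fft.fft(xb, axis=1) *
                              np.fft.fft(yb, axis=1), axis=1))
    return out.ravel()

def random_block_code(n_blocks, block_size, rng):
    """Sparse block-code: exactly one active unit per block."""
    v = np.zeros((n_blocks, block_size))
    v[np.arange(n_blocks), rng.integers(block_size, size=n_blocks)] = 1.0
    return v.ravel()

rng = np.random.default_rng(1)
a = random_block_code(n_blocks=4, block_size=8, rng=rng)
b = random_block_code(n_blocks=4, block_size=8, rng=rng)
bound = block_local_circular_convolution(a, b, block_size=8)
# Binding two sparse block-codes yields another sparse block-code:
# one value of about 1.0 per block, at the (i + j) mod 8 position.
print(bound.reshape(4, 8).round(6))
```

Note the key property the abstract emphasizes: the bound vector has the same dimensionality and the same sparse block structure as its inputs, so bindings can be nested to represent hierarchical structures.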
  3. By middle childhood, humans are able to learn abstract semantic relations (e.g., antonym, synonym, category membership) and use them to reason by analogy. A deep theoretical challenge is to show how such abstract relations can arise from nonrelational inputs, thereby providing key elements of a protosymbolic representation system. We have developed a computational model that exploits the potential synergy between deep learning from “big data” (to create semantic features for individual words) and supervised learning from “small data” (to create representations of semantic relations between words). Given as inputs labeled pairs of lexical representations extracted by deep learning, the model creates augmented representations by remapping features according to the rank of differences between values for the two words in each pair. These augmented representations aid in coping with the feature alignment problem (e.g., matching those features that make “love-hate” an antonym with the different features that make “rich-poor” an antonym). The model extracts weight distributions that are used to estimate the probabilities that new word pairs instantiate each relation, capturing the pattern of human typicality judgments for a broad range of abstract semantic relations. A measure of relational similarity can be derived and used to solve simple verbal analogies with human-level accuracy. Because each acquired relation has a modular representation, basic symbolic operations are enabled (notably, the converse of any learned relation can be formed without additional training). Abstract semantic relations can be induced by bootstrapping from nonrelational inputs, thereby enabling relational generalization and analogical reasoning.
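A rough sketch of the rank-remapping idea, under strong simplifying assumptions: each word pair is represented by its two embeddings plus its difference features reordered by rank, and a generic classifier is trained on these augmented vectors. The embeddings, labels, and choice of classifier below are stand-ins for illustration, not the published model.

```python
# Toy sketch of rank-based feature remapping for relation learning.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
dim, n_pairs = 20, 200
A = rng.normal(size=(n_pairs, dim))  # embedding of first word in each pair
B = rng.normal(size=(n_pairs, dim))  # embedding of second word in each pair

def augmented(a, b):
    """Concatenate the raw pair with its difference features sorted by
    rank, which helps align the features that carry a relation across
    pairs (the feature alignment problem described above)."""
    return np.concatenate([a, b, np.sort(a - b)])

X = np.array([augmented(a, b) for a, b in zip(A, B)])
y = rng.integers(2, size=n_pairs)  # toy relation labels (e.g., antonym vs. not)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def relational_similarity(p1, p2):
    """Cosine similarity between augmented pair representations."""
    v1, v2 = augmented(*p1), augmented(*p2)
    return v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))

print(relational_similarity((A[0], B[0]), (A[1], B[1])))
```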

     
  4. Abstract

    Human reasoning is grounded in an ability to identify highly abstract commonalities governing superficially dissimilar visual inputs. Recent efforts to develop algorithms with this capacity have largely focused on approaches that require extensive direct training on visual reasoning tasks, and yield limited generalization to problems with novel content. In contrast, a long tradition of research in cognitive science has focused on elucidating the computational principles underlying human analogical reasoning; however, this work has generally relied on manually constructed representations. Here we present visiPAM (visual Probabilistic Analogical Mapping), a model of visual reasoning that synthesizes these two approaches. VisiPAM employs learned representations derived directly from naturalistic visual inputs, coupled with a similarity-based mapping operation derived from cognitive theories of human reasoning. We show that without any direct training, visiPAM outperforms a state-of-the-art deep learning model on an analogical mapping task. In addition, visiPAM closely matches the pattern of human performance on a novel task involving mapping of 3D objects across disparate categories.
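To give a flavor of a similarity-based mapping operation, the sketch below maps the parts of one object onto the parts of another by maximizing cosine similarity of part embeddings under a one-to-one constraint (Hungarian algorithm). visiPAM additionally exploits relational (edge) information; the embeddings here are random stand-ins for learned visual features.

```python
# Simplified sketch of similarity-based part mapping.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(3)
source_parts = rng.normal(size=(5, 64))    # e.g., parts of object A
noise = rng.normal(scale=0.3, size=(5, 64))
perm = rng.permutation(5)
target_parts = source_parts[perm] + noise  # corresponding parts, shuffled

def normalize(M):
    return M / np.linalg.norm(M, axis=1, keepdims=True)

# Cosine similarity between every source-target part pair.
sim = normalize(source_parts) @ normalize(target_parts).T

# One-to-one mapping that maximizes total similarity.
rows, cols = linear_sum_assignment(-sim)
print(dict(zip(rows, cols)))
print(np.array_equal(cols, np.argsort(perm)))  # True: hidden correspondence recovered
```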

     
  5. Abstract

    Advances in artificial intelligence have raised a basic question about human intelligence: Is human reasoning best emulated by applying task‐specific knowledge acquired from a wealth of prior experience, or is it based on the domain‐general manipulation and comparison of mental representations? We address this question for the case of visual analogical reasoning. Using realistic images of familiar three‐dimensional objects (cars and their parts), we systematically manipulated viewpoints, part relations, and entity properties in visual analogy problems. We compared human performance to that of two recent deep learning models (Siamese Network and Relation Network) that were directly trained to solve these problems and to apply their task‐specific knowledge to analogical reasoning. We also developed a new model using part‐based comparison (PCM) by applying a domain‐general mapping procedure to learned representations of cars and their component parts. Across four‐term analogies (Experiment 1) and open‐ended analogies (Experiment 2), the domain‐general PCM model, but not the task‐specific deep learning models, generated performance similar in key aspects to that of human reasoners. These findings provide evidence that human‐like analogical reasoning is unlikely to be achieved by applying deep learning with big data to a specific type of analogy problem. Rather, humans do (and machines might) achieve analogical reasoning by learning representations that encode structural information useful for multiple tasks, coupled with efficient computation of relational similarity.
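As a minimal illustration of solving four-term analogies via relational similarity (a simplified stand-in, not the PCM model itself), the sketch below represents the A:B relation as a difference of embeddings and selects the candidate D whose C:D difference vector is most similar.

```python
# Toy A:B :: C:? solver using relational (difference-vector) similarity.
import numpy as np

rng = np.random.default_rng(4)
dim = 32
A, C = rng.normal(size=(2, dim))
shift = rng.normal(size=dim)            # a shared relational transformation
B, D_true = A + shift, C + shift
distractors = [C + rng.normal(size=dim) for _ in range(3)]
candidates = [D_true] + distractors

def rel_sim(u, v):
    """Cosine similarity between two relation (difference) vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

scores = [rel_sim(B - A, d - C) for d in candidates]
print(int(np.argmax(scores)))  # 0: the candidate that completes the analogy
```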

     