Title: Emergence of relational reasoning
We review recent theoretical and empirical work on the emergence of relational reasoning, drawing connections among the fields of comparative psychology, developmental psychology, cognitive neuroscience, cognitive science, and machine learning. Relational learning appears to involve multiple systems: a suite of Early Systems that are available to human infants and are shared to some extent with nonhuman animals; and a Late System that emerges in humans only, at approximately age three years. The Late System supports reasoning with explicit role-governed relations, and is closely tied to the functions of a frontoparietal network in the human brain. Recent work in cognitive science and machine learning suggests that humans (and perhaps machines) may acquire abstract relations from nonrelational inputs by means of processes that enable re-representation.
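The closing claim of this abstract, that abstract relations may be acquired from nonrelational inputs through processes that enable re-representation, can be illustrated with a small toy example. The Python sketch below is only an assumed illustration (the match/mismatch encoding, the thresholded score, and all names are choices made here, not the authors' model): pairs of raw feature vectors are re-coded as explicit relational features, after which even a trivial learner picks up a same/different rule that transfers to items never seen in training.

```python
# Toy sketch of "re-representation" (an illustrative assumption, not the
# authors' model): pairs of raw feature vectors are re-coded as explicit
# relational features (per-dimension match/mismatch), after which a trivial
# learner acquires a same/different rule that transfers to novel items.
import numpy as np

rng = np.random.default_rng(0)
N_DIMS = 8

def re_represent(a, b):
    """Re-code a pair of raw vectors as relational features: 1 = match, 0 = mismatch."""
    return (a == b).astype(float)

def make_pairs(n_pairs, vocab):
    """Sample labeled pairs; label 1 means the two items are identical."""
    X, y = [], []
    for _ in range(n_pairs):
        a = vocab[rng.integers(len(vocab))]
        b = a if rng.random() < 0.5 else vocab[rng.integers(len(vocab))]
        X.append(re_represent(a, b))
        y.append(float(np.array_equal(a, b)))
    return np.array(X), np.array(y)

# Training and test items come from disjoint feature vocabularies, so only
# the relational code (not item identity) can support generalization.
train_vocab = [rng.integers(0, 5, size=N_DIMS) for _ in range(20)]
test_vocab = [rng.integers(5, 10, size=N_DIMS) for _ in range(20)]
X_tr, y_tr = make_pairs(500, train_vocab)
X_te, y_te = make_pairs(200, test_vocab)

# "Learning" here is just placing a threshold on the summed match score,
# midway between the two classes observed in training.
score_tr = X_tr.sum(axis=1)
threshold = (score_tr[y_tr == 1].min() + score_tr[y_tr == 0].max()) / 2.0
pred = (X_te.sum(axis=1) >= threshold).astype(float)
print("same/different accuracy on never-seen items:", (pred == y_te).mean())
```

The point of the toy is that the transferable content lives in the relational code produced by re-representation, not in the raw item features themselves.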
Award ID(s):
1827374
PAR ID:
10231808
Author(s) / Creator(s):
Date Published:
Journal Name:
Current Opinion in Behavioral Sciences
Volume:
37
ISSN:
2352-1554
Page Range / eLocation ID:
118-124
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The goal of this review is to bring together material from cognitive psychology with recent machine vision studies to identify plausible neural mechanisms for visual same-different discrimination and relational understanding. We highlight how developments in the study of artificial neural networks provide computational evidence implicating attention and working memory in ascertaining visual relations, including same-different relations. We review some recent attempts to incorporate these mechanisms into flexible models of visual reasoning. Particular attention is given to recent models jointly trained on visual and linguistic information. These systems are promising, but they still fall short of the biological standard in several ways, which we outline in a final section.
  2. A hallmark of human intelligence is the ability to understand and influence other minds. Humans engage in inferential social learning (ISL) by using commonsense psychology to learn from others and help others learn. Recent advances in artificial intelligence (AI) are raising new questions about the feasibility of human–machine interactions that support such powerful modes of social learning. Here, we envision what it means to develop socially intelligent machines that can learn, teach, and communicate in ways that are characteristic of ISL. Rather than machines that simply predict human behaviours or recapitulate superficial aspects of human sociality (e.g. smiling, imitating), we should aim to build machines that can learn from human inputs and generate outputs for humans by proactively considering human values, intentions and beliefs. While such machines can inspire next-generation AI systems that learn more effectively from humans (as learners) and even help humans acquire new knowledge (as teachers), achieving these goals will also require scientific studies of its counterpart: how humans reason about machine minds and behaviours. We close by discussing the need for closer collaborations between the AI/ML and cognitive science communities to advance a science of both natural and artificial intelligence. This article is part of a discussion meeting issue ‘Cognitive artificial intelligence’. 
  3. Human reasoning goes beyond knowledge about individual entities, extending to inferences based on relations between entities. Here we focus on the use of relations in verbal analogical mapping, sketching a general approach based on assessing similarity between patterns of semantic relations between words. This approach combines research in artificial intelligence with work in psychology and cognitive science, with the aim of minimizing hand coding of text inputs for reasoning tasks. The computational framework takes as inputs vector representations of individual word meanings, coupled with semantic representations of the relations between words, and uses these inputs to form semantic-relation networks for individual analogues. Analogical mapping is operationalized as graph matching under cognitive and computational constraints. The approach highlights the central role of semantics in analogical mapping. 
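A minimal sketch of the kind of pipeline described in entry 3 above, under strong simplifying assumptions: word meanings are random stand-in vectors, relation representations are approximated by vector differences, and graph matching under constraints is reduced to brute-force search over node correspondences. None of these choices is taken from the cited framework; the sketch only shows how an analogical mapping can be scored as the summed similarity between corresponding relation vectors.

```python
# Toy sketch of analogical mapping as graph matching over semantic-relation
# networks. The embeddings, the use of vector differences as stand-ins for
# learned relation representations, and the brute-force matcher are all
# simplifying assumptions, not the framework described in the abstract.
from itertools import permutations
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

# Hypothetical word embeddings (random stand-ins for learned vectors).
rng = np.random.default_rng(1)
words = ["sun", "planet", "nucleus", "electron"]
emb = {w: rng.normal(size=16) for w in words}
# Force the two relations to be similar, so the analogy is recoverable.
emb["electron"] = emb["nucleus"] + (emb["planet"] - emb["sun"]) + 0.1 * rng.normal(size=16)

def relation_network(nodes, edges):
    """A semantic-relation network: nodes plus relation vectors on edges."""
    return {"nodes": nodes,
            "rel": {(a, b): emb[b] - emb[a] for a, b in edges}}

source = relation_network(["sun", "planet"], [("sun", "planet")])
target = relation_network(["nucleus", "electron"], [("nucleus", "electron")])

def best_mapping(src, tgt):
    """Brute-force graph matching: choose the node correspondence that
    maximizes summed similarity between mapped relation vectors."""
    best, best_score = None, -np.inf
    for perm in permutations(tgt["nodes"], len(src["nodes"])):
        mapping = dict(zip(src["nodes"], perm))
        score = 0.0
        for (a, b), r in src["rel"].items():
            mapped = (mapping[a], mapping[b])
            if mapped in tgt["rel"]:
                score += cosine(r, tgt["rel"][mapped])
        if score > best_score:
            best, best_score = mapping, score
    return best, best_score

mapping, score = best_mapping(source, target)
print(mapping)  # expected: {'sun': 'nucleus', 'planet': 'electron'}
```

In a realistic system the relation vectors would come from a learned relation model, and the matcher would enforce the cognitive and computational constraints mentioned in the abstract rather than exhaustively enumerating correspondences, which is feasible only for tiny graphs.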
  4. To achieve human-like common sense about everyday life, machine learning systems must understand and reason about the goals, preferences, and actions of other agents in the environment. By the end of their first year of life, human infants intuitively achieve such common sense, and these cognitive achievements lay the foundation for humans' rich and complex understanding of the mental states of others. Can machines achieve generalizable, commonsense reasoning about other agents like human infants? The Baby Intuitions Benchmark (BIB) challenges machines to predict the plausibility of an agent's behavior based on the underlying causes of its actions. Because BIB's content and paradigm are adopted from developmental cognitive science, BIB allows for direct comparison between human and machine performance. Nevertheless, recently proposed, deep-learning-based agency reasoning models fail to show infant-like reasoning, leaving BIB an open challenge. 
  5. Relational integration is required when multiple explicit representations of relations between entities must be jointly considered to make inferences. We provide an overview of the neural substrate of relational integration in humans and the processes that support it, focusing on work on analogical and deductive reasoning. In addition to neural evidence, we consider behavioral and computational work that has informed neural investigations of the representations of individual relations and of relational integration. In very general terms, evidence from neuroimaging, neuropsychological, and neuromodulatory studies points to a small set of regions (generally left lateralized) that appear to constitute key substrates for component processes of relational integration. These include posterior parietal cortex, implicated in the representation of first-order relations (e.g., A:B); rostrolateral PFC, apparently central in integrating first-order relations so as to generate and/or evaluate higher-order relations (e.g., A:B::C:D); dorsolateral PFC, involved in maintaining relations in working memory; and ventrolateral PFC, implicated in interference control (e.g., inhibiting salient information that competes with relevant relations). Recent work has begun to link computational models of relational representation and reasoning with patterns of neural activity within these brain areas.
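To make the distinction drawn in entry 5 between representing first-order relations (A:B) and integrating them to evaluate higher-order relations (A:B::C:D) more concrete, here is a toy Python sketch. The difference-vector code for first-order relations and the cosine comparison used for integration are illustrative assumptions only, not claims about the computational models or brain regions discussed above.

```python
# Toy sketch separating the two component processes named in this abstract:
# (1) representing first-order relations (A:B) and (2) integrating them to
# evaluate a higher-order relation (A:B::C:D). Vector differences stand in
# for first-order relation codes; this is an illustrative assumption, not a
# model of the brain regions discussed.
import numpy as np

rng = np.random.default_rng(2)
vocab = ["hot", "cold", "tall", "short", "wet", "loud"]
emb = {w: rng.normal(size=12) for w in vocab}
# Construct "cold" and "short" so that hot:cold parallels tall:short.
opposite = rng.normal(size=12)
emb["cold"] = emb["hot"] + opposite
emb["short"] = emb["tall"] + opposite + 0.1 * rng.normal(size=12)

def first_order(a, b):
    """Stage 1: a code for the relation A:B (difference vector as a proxy)."""
    return emb[b] - emb[a]

def integrate(rel_ab, rel_cd):
    """Stage 2: evaluate the higher-order relation A:B::C:D by comparing
    the two first-order codes (cosine similarity)."""
    return float(rel_ab @ rel_cd /
                 (np.linalg.norm(rel_ab) * np.linalg.norm(rel_cd) + 1e-12))

# Complete the analogy hot:cold :: tall:? by scoring candidate D terms.
rel_ab = first_order("hot", "cold")
candidates = ["short", "wet", "loud"]
scores = {d: integrate(rel_ab, first_order("tall", d)) for d in candidates}
print(max(scores, key=scores.get), scores)  # expected winner: "short"
```

Separating the two stages in code mirrors the functional division described in the abstract: one process produces relation codes, and a distinct process operates on those codes to evaluate the higher-order relation.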