

Title: Relational integration in the human brain: A review and synthesis
Relational integration is required when multiple explicit representations of relations between entities must be jointly considered to make inferences. We provide an overview of the neural substrate of relational integration in humans and the processes that support it, focusing on work on analogical and deductive reasoning. In addition to neural evidence, we consider behavioral and computational work that has informed neural investigations of the representations of individual relations and of relational integration. In very general terms, evidence from neuroimaging, neuropsychological, and neuromodulatory studies points to a small set of regions (generally left lateralized) that appear to constitute key substrates for component processes of relational integration. These include posterior parietal cortex, implicated in the representation of first-order relations (e.g., A:B); rostrolateral pFC, apparently central in integrating first-order relations so as to generate and/or evaluate higher-order relations (e.g., A:B::C:D); dorsolateral pFC, involved in maintaining relations in working memory; and ventrolateral pFC, implicated in interference control (e.g., inhibiting salient information that competes with relevant relations). Recent work has begun to link computational models of relational representation and reasoning with patterns of neural activity within these brain areas.
Award ID(s):
1827374
NSF-PAR ID:
10231814
Author(s) / Creator(s):
Date Published:
Journal Name:
Journal of cognitive neuroscience
Volume:
33
ISSN:
0898-929X
Page Range / eLocation ID:
341-356
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Analogy problems involving multiple ordered relations of the same type create mapping ambiguity, requiring some mechanism for relational integration to achieve mapping accuracy. We address the question of whether the integration of ordered relations depends on their logical form alone, or on semantic representations that differ across relation types. We developed a triplet mapping task that provides a basic paradigm to investigate analogical reasoning with simple relational structures. Experimental results showed that mapping performance differed across orderings based on category, linear order, and causal relations, providing evidence that each transitive relation has its own semantic representation. Hence, human analogical mapping of ordered relations does not depend solely on their formal property of transitivity. Instead, human ability to solve mapping problems by integrating relations relies on the semantics of relation representations. We also compared human performance to the performance of several vector-based computational models of analogy. These models performed above chance but fell short of human performance for some relations, highlighting the need for further model development. 
  2. Animals flexibly select actions that maximize future rewards despite facing uncertainty in sensory inputs, action-outcome associations, or contexts. The computational and circuit mechanisms underlying this ability are poorly understood. A clue to such computations can be found in the neural systems involved in representing sensory features, sensorimotor-outcome associations, and contexts. Specifically, the basal ganglia (BG) have been implicated in forming sensorimotor-outcome associations [1], while the thalamocortical loop between the prefrontal cortex (PFC) and mediodorsal thalamus (MD) has been shown to engage in contextual representations [2, 3]. Interestingly, both human and non-human animal experiments indicate that the MD represents different forms of uncertainty [3, 4]. However, finding evidence for uncertainty representation gives little insight into how it is utilized to drive behavior. Normative theories have excelled at providing such computational insights. For example, deploying traditional machine learning algorithms to fit human decision-making behavior has clarified how associative uncertainty alters exploratory behavior [5, 6]. However, despite their computational insight and ability to fit behaviors, normative models cannot be directly related to neural mechanisms. Therefore, a critical gap exists between what we know about the neural representation of uncertainty on one end and the computational functions uncertainty serves in cognition on the other. This gap can be filled with mechanistic neural models that can approximate normative models as well as generate experimentally observed neural representations. In this work, we build a mechanistic cortico-thalamo-BG loop network model that directly fills this gap.
The model includes computationally relevant mechanistic details of both BG and thalamocortical circuits, such as distributional activities of dopamine [7] and thalamocortical projections modulating cortical effective connectivity [3] and plasticity [8] via interneurons. We show that our network can more efficiently and flexibly explore various environments compared to commonly used machine learning algorithms, and we show that the mechanistic features we include are crucial for handling different types of uncertainty in decision-making. Furthermore, through derivation and mathematical proofs, we show that our model approximates two novel normative theories. We show mathematically that the first has near-optimal performance on bandit tasks. The second is a generalization of the well-known CUSUM algorithm, which is known to be optimal on single change point detection tasks [9]. Our normative model expands on this by detecting multiple sequential contextual changes. To our knowledge, our work is the first to link computational insights, normative models, and neural realization together in decision-making under various forms of uncertainty.
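The single change point case mentioned above can be illustrated with Page's classic CUSUM statistic, which accumulates evidence that the mean of a sequence has shifted upward. The sketch below is not taken from the paper; the reference drift `k` and threshold `h` are hypothetical tuning values chosen for this toy example.

```python
# Illustrative sketch (not from the paper): Page's CUSUM detector
# for a single upward mean shift in a noisy sequence. The drift k
# and threshold h are hypothetical tuning parameters.

def cusum_detect(xs, k=0.5, h=4.0):
    """Return the index at which the cumulative-sum statistic first
    exceeds threshold h, or None if no change point is flagged."""
    s = 0.0
    for i, x in enumerate(xs):
        s = max(0.0, s + x - k)  # accumulate evidence above drift k
        if s > h:
            return i
    return None

# Toy sequence: mean ~0 for 10 samples, then mean ~2.
data = [0.1, -0.2, 0.0, 0.3, -0.1, 0.2, 0.0, -0.3, 0.1, 0.0,
        2.1, 1.8, 2.2, 1.9, 2.0]
change_at = cusum_detect(data)
```

The `max(0, ...)` reset keeps the statistic from drifting negative before the change, which is what makes CUSUM sensitive to a shift soon after it occurs.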
  3. By middle childhood, humans are able to learn abstract semantic relations (e.g., antonym, synonym, category membership) and use them to reason by analogy. A deep theoretical challenge is to show how such abstract relations can arise from nonrelational inputs, thereby providing key elements of a protosymbolic representation system. We have developed a computational model that exploits the potential synergy between deep learning from “big data” (to create semantic features for individual words) and supervised learning from “small data” (to create representations of semantic relations between words). Given as inputs labeled pairs of lexical representations extracted by deep learning, the model creates augmented representations by remapping features according to the rank of differences between values for the two words in each pair. These augmented representations aid in coping with the feature alignment problem (e.g., matching those features that make “love-hate” an antonym with the different features that make “rich-poor” an antonym). The model extracts weight distributions that are used to estimate the probabilities that new word pairs instantiate each relation, capturing the pattern of human typicality judgments for a broad range of abstract semantic relations. A measure of relational similarity can be derived and used to solve simple verbal analogies with human-level accuracy. Because each acquired relation has a modular representation, basic symbolic operations are enabled (notably, the converse of any learned relation can be formed without additional training). Abstract semantic relations can be induced by bootstrapping from nonrelational inputs, thereby enabling relational generalization and analogical reasoning.
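The rank-based remapping step described above can be given a toy sketch: for a word pair (A, B), feature indices are reordered by the rank of the difference A − B, so that pairs instantiating the same relation align on comparable feature positions. The 4-dimensional vectors and their values below are invented for illustration and are not the model's learned representations.

```python
# Hypothetical sketch of rank-based feature remapping: reorder both
# vectors of a word pair by the rank of their featurewise difference.
# The toy 4-dimensional vectors stand in for learned semantic features.

def remap_by_rank(a, b):
    """Sort feature indices by (a[i] - b[i]), descending, and return
    both vectors reordered by that shared ranking."""
    order = sorted(range(len(a)), key=lambda i: a[i] - b[i], reverse=True)
    return [a[i] for i in order], [b[i] for i in order]

# Invented feature values for an antonym pair.
love = [0.9, 0.1, 0.5, 0.2]
hate = [0.1, 0.8, 0.5, 0.3]
a_ranked, b_ranked = remap_by_rank(love, hate)
```

Because both vectors are permuted by the same ranking, the features that most distinguish one antonym pair land in the same positions as the (possibly different) features that most distinguish another, easing the alignment problem the abstract describes.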

     
  4. Many computational models of reasoning rely on explicit relation representations to account for human cognitive capacities such as analogical reasoning. Relational luring, a phenomenon observed in recognition memory, has been interpreted as evidence that explicit relation representations also impact episodic memory; however, this assumption has not been rigorously assessed by computational modeling. We implemented an established model of recognition memory, the Generalized Context Model (GCM), as a framework for simulating human performance on an old/new recognition task that elicits relational luring. Within this basic theoretical framework, we compared representations based on explicit relations, lexical semantics (i.e., individual word meanings), and a combination of the two. We compared the same alternative representations as predictors of accuracy in solving explicit verbal analogies. In accord with previous work, we found that explicit relation representations are necessary for modeling analogical reasoning. In contrast, preliminary simulations incorporating model parameters optimized to fit human data reproduce relational luring using any of the alternative representations, including one based on non-relational lexical semantics. Further work on model comparisons is needed to examine the contributions of lexical semantics and relations to the luring effect in recognition memory.
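The GCM's core computation, summed similarity to stored exemplars with similarity decaying exponentially in distance, can be sketched in miniature. The specificity parameter `c`, the city-block distance, and the toy two-feature "representations" below are assumptions for illustration, not the representations compared in the study.

```python
import math

# Minimal sketch of the Generalized Context Model's summed-similarity
# rule. The specificity parameter c and the toy two-feature exemplars
# are hypothetical values chosen only to illustrate the mechanism.

def gcm_familiarity(probe, exemplars, c=1.0):
    """Summed similarity of a probe to stored exemplars, with
    similarity decaying exponentially in city-block distance."""
    def dist(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))
    return sum(math.exp(-c * dist(probe, e)) for e in exemplars)

study = [(0.0, 0.0), (1.0, 1.0)]          # studied "items"
old = gcm_familiarity((0.0, 0.0), study)  # exact match to a studied item
lure = gcm_familiarity((0.2, 0.1), study) # near a studied item
new = gcm_familiarity((3.0, 3.0), study)  # far from both
```

Under this rule a lure close to a studied item receives a familiarity value between that of an old item and a genuinely new one, which is the graded signal a recognition model needs to produce luring once the representation space encodes relational overlap.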
  5. Abstract

    Reasoning, our ability to solve novel problems, has been shown to improve as a result of learning experiences. However, the underlying mechanisms of change in this high-level cognitive ability are unclear. We hypothesized that possible mechanisms include improvements in the encoding, maintenance, and/or integration of relations among mental representations – i.e., relational thinking. Here, we developed several eye gaze metrics to pinpoint learning mechanisms that underpin improved reasoning performance. We collected behavioral and eyetracking data from young adults who participated in a Law School Admission Test preparation course involving word-based reasoning problems or reading comprehension. The Reasoning group improved more than the Comprehension group on a composite measure of four visuospatial reasoning assessments. Both groups improved similarly on an eyetracking paradigm involving transitive inference problems, exhibiting faster response times while maintaining high accuracy levels; nevertheless, the Reasoning group exhibited a larger change than the Comprehension group on an ocular metric of relational thinking. Across the full sample, individual differences in response time reductions were associated with increased efficiency of relational thinking. Accounting for changes in visual search and a more specific measure of relational integration improved the prediction accuracy of the model, but changes in these two processes alone did not adequately explain behavioral improvements. These findings provide evidence of transfer of learning across different kinds of reasoning problems after completing a brief but intensive course. More broadly, the high temporal precision and rich derivable parameters of eyetracking make it a powerful approach for probing learning mechanisms.

     