Relational reasoning is a hallmark of human higher cognition and creativity, yet it is notoriously difficult to encourage in abstract tasks, even in adults. Young children generally focus more on objects at first, but with age become more focused on relations. While prerequisite knowledge and the maturation of cognitive resources partially explain this pattern, here we propose a new facet important for children's relational reasoning development: a general orientation to relational information, or a relational mindset. We demonstrate that a relational mindset can be elicited, even in 4‐year‐old children, yielding greater-than-expected spontaneous attention to relations. Children either generated, or listened to an experimenter state, the relationships between objects in a set of formal analogy problems, and then in a second task selected object or relational matches according to their preference. Children tended to make object mappings, but those who generated relations on the first task selected relational matches more often on the second task, signaling that relational attention is malleable even in young children.

- Award ID(s): 2200781
- PAR ID: 10498216
- Publisher / Repository: Elsevier
- Date Published:
- Journal Name: Cognition
- Volume: 243
- Issue: C
- ISSN: 0010-0277
- Page Range / eLocation ID: 105671
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation

More Like this

-
Abstract Teaching a new concept through gestures—hand movements that accompany speech—facilitates learning above‐and‐beyond instruction through speech alone (e.g., Singer & Goldin‐Meadow). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often-proposed mechanism—gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye-tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture—they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no‐gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning—following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech. -
The theory that the hippocampus is critical for visual memory and relational cognition has been challenged by the discovery of more spared hippocampal tissue than previously reported in H.M., previously unreported extra-hippocampal damage in developmental amnesiacs, and findings that the hippocampus is unnecessary for object-in-context memory in monkeys. These challenges highlight the need for causal tests of hippocampal function in nonhuman primate models. Here, we tested rhesus monkeys on a battery of cognitive tasks including transitive inference, temporal order memory, shape recall, source memory, and image recognition. Contrary to predictions, we observed no robust impairments in memory or relational cognition either within- or between-groups following hippocampal damage. These results caution against over-generalizing from human correlational studies or rodent experimental studies, compel a new generation of nonhuman primate studies, and indicate that we should reassess the relative contributions of the hippocampus proper compared to other regions in visual memory and relational cognition.
-
Relational integration is required when multiple explicit representations of relations between entities must be jointly considered to make inferences. We provide an overview of the neural substrate of relational integration in humans and the processes that support it, focusing on work on analogical and deductive reasoning. In addition to neural evidence, we consider behavioral and computational work that has informed neural investigations of the representations of individual relations and of relational integration. In very general terms, evidence from neuroimaging, neuropsychological, and neuromodulatory studies points to a small set of regions (generally left lateralized) that appear to constitute key substrates for component processes of relational integration. These include posterior parietal cortex, implicated in the representation of first-order relations (e.g., A:B); rostrolateral pFC, apparently central in integrating first-order relations so as to generate and/or evaluate higher-order relations (e.g., A:B::C:D); dorsolateral pFC, involved in maintaining relations in working memory; and ventrolateral pFC, implicated in interference control (e.g., inhibiting salient information that competes with relevant relations). Recent work has begun to link computational models of relational representation and reasoning with patterns of neural activity within these brain areas.
-
Linguistic communication is an intrinsically social activity that enables us to share thoughts across minds. Many complex social uses of language can be captured by domain-general representations of other minds (i.e., mentalistic representations) that externally modulate linguistic meaning through Gricean reasoning. However, here we show that representations of others’ attention are embedded within language itself. Across ten languages, we show that demonstratives—basic grammatical words (e.g., “this”/“that”) which are evolutionarily ancient, learned early in life, and documented in all known languages—are intrinsic attention tools. Beyond their spatial meanings, demonstratives encode both joint attention and the direction in which the listener must turn to establish it. Crucially, the frequency of the spatial and attentional uses of demonstratives varies across languages, suggesting that both spatial and mentalistic representations are part of their conventional meaning. Using computational modeling, we show that mentalistic representations of others’ attention are internally encoded in demonstratives, with their effect further boosted by Gricean reasoning. Yet, speakers are largely unaware of this, incorrectly reporting that they primarily capture spatial representations. Our findings show that representations of other people’s cognitive states (namely, their attention) are embedded in language and suggest that the most basic building blocks of the linguistic system crucially rely on social cognition.