This content will become publicly available on February 1, 2025

Title: Language systematizes attention: How relational language enhances relational representation by guiding attention
Language can affect cognition, but through what mechanism? Substantial past research has focused on how labeling can elicit categorical representation during online processing. We focus here on a particularly powerful type of language—relational language—and show that relational language can enhance relational representation in children through an embodied attention mechanism. Four-year-old children were given a color-location conjunction task, in which they were asked to encode a two-color square, split either vertically or horizontally (e.g., red on the left, blue on the right), and later recall the same configuration from its mirror reflection. During the encoding phase, children in the experimental condition heard relational language (e.g., “Red is on the left of blue”), while those in the control condition heard generic non-relational language (e.g., “Look at this one, look at it closely”). At recall, children in the experimental condition were more successful at choosing the correct relational representation between the two colors compared to the control group. Moreover, they exhibited different attention patterns as predicted by the attention shift account of relational representation (Franconeri et al., 2012). To test the sustained effect of language and the role of attention, during the second half of the study, the experimental condition was given generic non-relational language. There was a sustained advantage in the experimental condition for both behavioral accuracy and signature attention patterns. Overall, our findings suggest that relational language enhances relational representation by guiding learners’ attention, and this facilitative effect persists over time even in the absence of language. Implications for the mechanism of how relational language can enhance the learning of relational systems (e.g., mathematics, spatial cognition) by guiding attention will be discussed.
Award ID(s): 2200781
NSF-PAR ID: 10498216
Author(s) / Creator(s): ; ; ;
Publisher / Repository: Elsevier
Date Published:
Journal Name: Cognition
Volume: 243
Issue: C
ISSN: 0010-0277
Page Range / eLocation ID: 105671
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Abstract

    Relational reasoning is a hallmark of human higher cognition and creativity, yet it is notoriously difficult to encourage in abstract tasks, even in adults. Generally, young children initially focus more on objects, but with age become more focused on relations. While prerequisite knowledge and cognitive resource maturation partially explain this pattern, here we propose a new facet important for children's relational reasoning development: a general orientation to relational information, or a relational mindset. We demonstrate that a relational mindset can be elicited, even in 4‐year‐old children, yielding greater than expected spontaneous attention to relations. Children either generated or listened to an experimenter state the relationships between objects in a set of formal analogy problems, and then in a second task, selected object or relational matches according to their preference. Children tended to make object mappings, but those who generated relations on the first task selected relational matches more often on the second task, signaling that relational attention is malleable even in young children.

  2. Abstract

    Teaching a new concept through gestures—hand movements that accompany speech—facilitates learning above and beyond instruction through speech alone (e.g., Singer & Goldin‐Meadow). However, the mechanisms underlying this phenomenon are still under investigation. Here, we use eye tracking to explore one often proposed mechanism—gesture's ability to direct visual attention. Behaviorally, we replicate previous findings: Children perform significantly better on a posttest after learning through Speech+Gesture instruction than through Speech Alone instruction. Using eye tracking measures, we show that children who watch a math lesson with gesture do allocate their visual attention differently from children who watch a math lesson without gesture—they look more to the problem being explained, less to the instructor, and are more likely to synchronize their visual attention with information presented in the instructor's speech (i.e., follow along with speech) than children who watch the no‐gesture lesson. The striking finding is that, even though these looking patterns positively predict learning outcomes, the patterns do not mediate the effects of training condition (Speech Alone vs. Speech+Gesture) on posttest success. We find instead a complex relation between gesture and visual attention in which gesture moderates the impact of visual looking patterns on learning—following along with speech predicts learning for children in the Speech+Gesture condition, but not for children in the Speech Alone condition. Gesture's beneficial effects on learning thus come not merely from its ability to guide visual attention, but also from its ability to synchronize with speech and affect what learners glean from that speech.

  3.
    The theory that the hippocampus is critical for visual memory and relational cognition has been challenged by discovery of more spared hippocampal tissue than previously reported in H.M., previously unreported extra-hippocampal damage in developmental amnesiacs, and findings that the hippocampus is unnecessary for object-in-context memory in monkeys. These challenges highlight the need for causal tests of hippocampal function in nonhuman primate models. Here, we tested rhesus monkeys on a battery of cognitive tasks including transitive inference, temporal order memory, shape recall, source memory, and image recognition. Contrary to predictions, we observed no robust impairments in memory or relational cognition either within- or between-groups following hippocampal damage. These results caution against over-generalizing from human correlational studies or rodent experimental studies, compel a new generation of nonhuman primate studies, and indicate that we should reassess the relative contributions of the hippocampus proper compared to other regions in visual memory and relational cognition. 
  4.
    Relational integration is required when multiple explicit representations of relations between entities must be jointly considered to make inferences. We provide an overview of the neural substrate of relational integration in humans and the processes that support it, focusing on work on analogical and deductive reasoning. In addition to neural evidence, we consider behavioral and computational work that has informed neural investigations of the representations of individual relations and of relational integration. In very general terms, evidence from neuroimaging, neuropsychological, and neuromodulatory studies points to a small set of regions (generally left lateralized) that appear to constitute key substrates for component processes of relational integration. These include posterior parietal cortex, implicated in the representation of first-order relations (e.g., A:B); rostrolateral pFC, apparently central in integrating first-order relations so as to generate and/or evaluate higher-order relations (e.g., A:B::C:D); dorsolateral pFC, involved in maintaining relations in working memory; and ventrolateral pFC, implicated in interference control (e.g., inhibiting salient information that competes with relevant relations). Recent work has begun to link computational models of relational representation and reasoning with patterns of neural activity within these brain areas. 
  5. Purpose: This study examined whether 2-year-olds are better able to acquire novel verb meanings when they appear in varying linguistic contexts, including both content nouns and pronouns, as compared to when the contexts are consistent, including only content nouns. Additionally, differences between typically developing toddlers and late talkers were explored.
     Method: Forty-seven English-acquiring 2-year-olds (n = 14 late talkers, n = 33 typically developing) saw scenes of actors manipulating objects. These actions were labeled with novel verbs. In the varied condition, children heard sentences containing both content nouns and pronouns (e.g., “The girl is ziffing the truck. She is ziffing it!”). In the consistent condition, children heard the verb an equal number of times, but only with content nouns (e.g., “The girl is ziffing the truck. The girl is ziffing the truck!”). At test, children were shown two new scenes and were asked to find the novel verb's referent. Children's eye gaze was analyzed as a measure of learning.
     Results: Mixed-effects regression analyses revealed that children looked more toward the correct scene in the consistent condition than the varied condition. This difference was more pronounced for late talkers than for typically developing children.
     Conclusion: To acquire an initial representation of a new verb's meaning, children, particularly late talkers, benefit more from hearing the verb in consistent linguistic contexts than in varying contexts.