Title: The role of shared labels and shared experiences in representational alignment
Successful communication is thought to require members of a speech community to learn common mappings between words and their referents. But if one person’s concept of CAR is very different from another person’s, communication might fail despite the common mappings, because different people would mean different things by the same word. Here we investigate the possibility that one source of representational alignment is language itself. We report a series of neural network simulations investigating how representational alignment changes as a function of agents having more or less similar visual experiences (overlap in “visual diet”) and how it changes with exposure to category names. We find that agents with more similar visual experiences have greater representational overlap. However, the presence of category labels not only increases representational overlap, but also greatly reduces the importance of having similar visual experiences. The results suggest that ensuring representational alignment may be one of language’s evolved functions.
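The abstract describes measuring representational overlap between agents without specifying the metric. As a rough illustration of how overlap between two agents’ embeddings of the same images might be quantified, the sketch below uses linear Centered Kernel Alignment (CKA); the metric choice, array shapes, and synthetic data are assumptions for illustration, not the paper’s actual simulation code.

```python
# Sketch: quantifying representational overlap between two agents.
# Assumption: each agent exposes an embedding matrix of shape
# (n_images, n_features) for the same probe images; the paper does not
# specify its overlap measure, so linear CKA is used here purely as an
# illustration.
import numpy as np

def linear_cka(x: np.ndarray, y: np.ndarray) -> float:
    """Linear Centered Kernel Alignment between two embedding matrices.

    Both matrices must have the same number of rows (stimuli); the feature
    dimensions may differ. Returns a value in [0, 1], where higher means
    more similar representational geometry.
    """
    x = x - x.mean(axis=0, keepdims=True)  # center each feature
    y = y - y.mean(axis=0, keepdims=True)
    # ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    cross = np.linalg.norm(x.T @ y, "fro") ** 2
    norm_x = np.linalg.norm(x.T @ x, "fro")
    norm_y = np.linalg.norm(y.T @ y, "fro")
    return float(cross / (norm_x * norm_y))

# Hypothetical usage: embeddings of the same 500 probe images from two
# agents trained on partially overlapping "visual diets".
rng = np.random.default_rng(0)
agent_a = rng.normal(size=(500, 128))
agent_b = 0.7 * agent_a[:, :64] + rng.normal(size=(500, 64))  # shares some structure
print(f"representational overlap (CKA): {linear_cka(agent_a, agent_b):.3f}")
```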
Award ID(s):
2020969
PAR ID:
10547760
Author(s) / Creator(s):
; ; ;
Editor(s):
Nölle, J; Raviv, L; Graham, E; Hartmann, S; Jadoul, Y; Josserand, M; Matzinger, T; Mudd, K; Pleyer, M; Slonimska, A; Wacewicz, S; Watson, S
Publisher / Repository:
The Evolution of Language: Proceedings of the 15th International Conference (Evolang XV)
Date Published:
Format(s):
Medium: X
Location:
Madison, WI
Sponsoring Org:
National Science Foundation
More Like this
  1. Denison, S.; Mack, M.; Xu, Y.; Armstrong, B.C. (Ed.)
    What affects whether one person represents an item in a similar way to another person? We examined the role of verbal labels in promoting representational alignment. Three groups of participants sorted novel shapes on perceived similarity. Prior to sorting, participants in two of the groups were pre-exposed to the shapes using a simple visual matching task, and in one of these groups the shapes were accompanied by one of two novel category labels. Exposure with labels led people to represent the shapes in a more categorical way and increased alignment between sorters, despite the two categories being visually distinct and participants in both pre-exposure conditions receiving identical visual experience of the shapes. Results hint that labels play a role in aligning people's mental representations, even in the absence of communication.
  2. What determines whether two people represent something in a similar way? We examined the role of verbal labels in promoting representational alignment. Across two experiments, three groups of participants sorted novel shapes from two visually dissimilar categories. Prior to sorting, participants in two of the groups were pre-exposed to the shapes using a simple visual matching task designed to reinforce the visual category structure. In one of these groups, participants additionally heard one of two nonsense category labels accompanying the shapes. Exposure to these redundant labels led people to represent the shapes in a more categorical way, which led to greater alignment between sorters. We found this effect of label-induced alignment despite the two categories being highly visually distinct and despite participants in both pre-exposure conditions receiving identical visual experience with the shapes. Experiment 2 replicated this basic result using even more stringent testing conditions. The results hint at the possibly extensive role that labels may play in aligning people’s mental representations.
  3. Visual scene category representations emerge very rapidly, yet the computational transformations that enable such invariant categorizations remain elusive. Deep convolutional neural networks (CNNs) perform visual categorization at near human-level accuracy using a feedforward architecture, providing neuroscientists with the opportunity to assess one successful series of representational transformations that enable categorization in silico. The goal of the current study is to assess the extent to which sequential scene category representations built by a CNN map onto those built in the human brain as assessed by high-density, time-resolved event-related potentials (ERPs). We found correspondence both over time and across the scalp: earlier (0–200 ms) ERP activity was best explained by early CNN layers at all electrodes. Although later activity at most electrode sites corresponded to earlier CNN layers, activity in right occipito-temporal electrodes was best explained by the later, fully-connected layers of the CNN around 225 ms post-stimulus, along with similar patterns in frontal electrodes. Taken together, these results suggest that scene category representations emerge through a dynamic interplay between early activity over occipital electrodes and later activity over temporal and frontal electrodes.
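As a rough sketch of the kind of time-resolved comparison described in the preceding entry, the snippet below correlates a representational dissimilarity matrix (RDM) for each ERP time point with the RDM of each CNN layer and reports the best-matching layer. The data are random placeholders, and the specific choices (Spearman correlation over upper-triangle RDM entries) are assumptions for illustration, not the study’s actual pipeline.

```python
# Sketch: time-resolved RSA relating CNN layer representations to ERP data.
# Assumptions: RDMs are precomputed condition-by-condition dissimilarity
# matrices; a real analysis would use actual CNN activations and ERP
# amplitudes rather than the random placeholders below.
import numpy as np
from scipy.stats import spearmanr

def upper_tri(rdm: np.ndarray) -> np.ndarray:
    """Vectorize the upper triangle (excluding the diagonal) of an RDM."""
    i, j = np.triu_indices(rdm.shape[0], k=1)
    return rdm[i, j]

def best_layer_per_timepoint(erp_rdms, layer_rdms):
    """For each time point, find the CNN layer whose RDM correlates best
    (Spearman) with the ERP RDM at that time point."""
    results = []
    for t, erp_rdm in enumerate(erp_rdms):
        rhos = []
        for layer_rdm in layer_rdms:
            rho, _ = spearmanr(upper_tri(erp_rdm), upper_tri(layer_rdm))
            rhos.append(rho)
        results.append((t, int(np.argmax(rhos)), float(max(rhos))))
    return results

# Placeholder data: 100 time points and 8 CNN layers over 20 conditions.
rng = np.random.default_rng(1)
erp_rdms = rng.random((100, 20, 20))
layer_rdms = rng.random((8, 20, 20))
for t, layer, rho in best_layer_per_timepoint(erp_rdms, layer_rdms)[:5]:
    print(f"time point {t}: best layer = {layer}, Spearman rho = {rho:.2f}")
```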
  4. A fundamental principle of neural representation is to minimize wiring length by spatially organizing neurons according to the frequency of their communication [Sterling and Laughlin, 2015]. A consequence is that nearby regions of the brain tend to represent similar content. This has been explored in the context of the visual cortex in recent works [Doshi and Konkle, 2023, Tong et al., 2023]. Here, we use the notion of cortical distance as a baseline to ground, evaluate, and interpret measures of representational distance. We compare several popular methods—both second-order methods (Representational Similarity Analysis, Centered Kernel Alignment) and first-order methods (Shape Metrics)—and calculate how well the representational distance reflects 2D anatomical distance along the visual cortex (the anatomical stress score). We evaluate these metrics on a large-scale fMRI dataset of human ventral visual cortex [Allen et al., 2022b], and observe that the 3 types of Shape Metrics produce representational-anatomical stress scores with the smallest variance across subjects (Z score = -1.5), which suggests that first-order representational scores quantify the relationship between representational and cortical geometry in a way that is more invariant across different subjects. Our work establishes a criterion with which to compare methods for quantifying representational similarity, with implications for studying the anatomical organization of high-level ventral visual cortex.
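The preceding entry evaluates representational-distance measures against 2D cortical distance via an "anatomical stress score" whose exact definition is not given here. The sketch below illustrates one simple variant, a Kruskal-style stress between rescaled representational distances and anatomical distances over cortical patches; the placeholder data, distance metrics, and stress formula are assumptions for illustration, not the study’s actual score.

```python
# Sketch: comparing representational distance to 2D anatomical distance
# across cortical patches. Uses a Kruskal-style stress on placeholder data;
# this is not the paper's exact "anatomical stress score".
import numpy as np
from scipy.spatial.distance import pdist

def anatomical_stress(rep_dists: np.ndarray, anat_dists: np.ndarray) -> float:
    """Stress between representational and anatomical pairwise distances.

    Both inputs are condensed distance vectors over the same set of cortical
    patches. Lower stress = representational geometry better reflects
    cortical geometry.
    """
    # Rescale representational distances to best match anatomical ones,
    # then measure the normalized residual mismatch.
    scale = (rep_dists @ anat_dists) / (rep_dists @ rep_dists)
    resid = anat_dists - scale * rep_dists
    return float(np.sqrt((resid @ resid) / (anat_dists @ anat_dists)))

# Placeholder: 50 cortical patches, each with a 2D surface position and a
# 300-dimensional response profile (e.g., mean betas across stimuli).
rng = np.random.default_rng(2)
positions_2d = rng.random((50, 2))
responses = rng.normal(size=(50, 300))
anat = pdist(positions_2d)                    # 2D cortical distances
rep = pdist(responses, metric="correlation")  # representational distances
print(f"anatomical stress: {anatomical_stress(rep, anat):.3f}")
```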
  5. Do people have dispositions towards visual or verbal thinking styles, i.e., a tendency towards one default representational modality versus the other? The problem in trying to answer this question is that visual/verbal thinking styles are challenging to measure. Subjective, introspective measures are the most common but often show poor reliability and validity; neuroimaging studies can provide objective evidence but are intrusive and resource-intensive. In previous work, we observed that for a purely behavioral testing method to objectively evaluate a person’s visual/verbal thinking style, 1) the task must be solvable equally well using either visual or verbal mental representations, and 2) it must offer a secondary behavioral marker, in addition to primary performance measures, that indicates which modality is being used. We collected four such tasks from the psychology literature and conducted a small pilot study with adult participants to see the extent to which visual/verbal thinking styles can be differentiated using an individual’s results on these tasks.