

Title: Nameability predicts subjective and objective measures of visual similarity
Do people perceive shapes to be similar based purely on their physical features? Or is visual similarity influenced by top-down knowledge? In the present studies, we demonstrate that top-down information – in the form of verbal labels that people associate with visual stimuli – predicts visual similarity as measured using subjective (Experiment 1) and objective (Experiment 2) tasks. In Experiment 1, shapes that were previously calibrated to be (putatively) perceptually equidistant were more likely to be grouped together if they shared a name. In Experiment 2, more nameable shapes were easier for participants to discriminate from other images, again controlling for their perceptual distance. We discuss what these results mean for constructing visual stimulus spaces that are perceptually uniform and discuss theoretical implications of the fact that perceptual similarity is sensitive to top-down information such as the ease with which an object can be named.
Award ID(s):
1734260
PAR ID:
10213697
Author(s) / Creator(s):
; ;
Editor(s):
Denison, S.; Mack, M.; Xu, Y.; Armstrong, B.C.
Date Published:
Journal Name:
Proceedings of the 42nd Annual Virtual Meeting of the Cognitive Science Society
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1.
    Abstract The ability to take contextual information into account is essential for successful speech processing. This study examines individuals with high-functioning autism and those without in terms of how they adjust their perceptual expectation while discriminating speech sounds in different phonological contexts. Listeners were asked to discriminate pairs of sibilant-vowel monosyllables. Typically, discriminability of sibilants increases when the sibilants are embedded in perceptually enhancing contexts (if the appropriate context-specific perceptual adjustment were performed) and decreases in perceptually diminishing contexts. This study found a reduction in the differences in perceptual response across enhancing and diminishing contexts among high-functioning autistic individuals relative to the neurotypical controls. The reduction in perceptual expectation adjustment is consistent with an increase in autonomy in low-level perceptual processing in autism and a reduction in the influence of top-down information from surrounding information. 
  2. Human judgments of similarity and difference are sometimes asymmetrical, with the former being more sensitive than the latter to relational overlap, but the theoretical basis for this asymmetry remains unclear. We test an explanation based on the type of information used to make these judgments (relations versus features) and the comparison process itself (similarity versus difference). We propose that asymmetries arise from two aspects of cognitive complexity that impact judgments of similarity and difference: processing relations between entities is more cognitively demanding than processing features of individual entities, and comparisons assessing difference are more cognitively complex than those assessing similarity. In Experiment 1 we tested this hypothesis for both verbal comparisons between word pairs, and visual comparisons between sets of geometric shapes. Participants were asked to select one of two options that was either more similar to or more different from a standard. On unambiguous trials, one option was unambiguously more similar to the standard; on ambiguous trials, one option was more featurally similar to the standard, whereas the other was more relationally similar. Given the higher cognitive complexity of processing relations and of assessing difference, we predicted that detecting relational difference would be particularly demanding. We found that participants (1) had more difficulty detecting relational difference than they did relational similarity on unambiguous trials, and (2) tended to emphasize relational information more when judging similarity than when judging difference on ambiguous trials. The latter finding was replicated using more complex story stimuli (Experiment 2). We showed that this pattern can be captured by a computational model of comparison that weights relational information more heavily for similarity than for difference judgments. 
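The differential weighting described in this abstract can be illustrated with a toy scoring rule. This is a sketch under stated assumptions, not the authors' fitted model: the function names, the weights (0.7 vs. 0.4), and the overlap values are invented for illustration only.

```python
# Toy illustration of a comparison model that weights relational overlap
# more heavily for similarity judgments than for difference judgments.
# The weights 0.7 and 0.4 are made-up values, not fitted parameters.

def weighted_overlap(rel_overlap, feat_overlap, judgment):
    """Combine relational and featural overlap with a judgment-dependent weight."""
    w = 0.7 if judgment == "similarity" else 0.4
    return w * rel_overlap + (1 - w) * feat_overlap

def choose(options, judgment):
    """options: list of (rel_overlap, feat_overlap) tuples with the standard.

    Returns the index of the option judged more similar (max overlap)
    or more different (min overlap), depending on the judgment type.
    """
    scores = [weighted_overlap(r, f, judgment) for r, f in options]
    best = max(scores) if judgment == "similarity" else min(scores)
    return scores.index(best)
```

On an ambiguous trial with one relationally matching option (high relational, low featural overlap) and one featurally matching option, the higher relational weight under a similarity instruction makes the relational match win, reproducing the qualitative asymmetry the abstract reports.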
  3. Abstract Recent investigations on how people derive meaning from language have focused on task‐dependent shifts between two cognitive systems. The symbolic (amodal) system represents meaning as the statistical relationships between words. The embodied (modal) system represents meaning through neurocognitive simulation of perceptual or sensorimotor systems associated with a word's referent. A primary finding of literature in this field is that the embodied system is only dominant when a task necessitates it, but in certain paradigms, this has only been demonstrated using nouns and adjectives. The purpose of this paper is to study whether similar effects hold with verbs. Experiment 1 evaluated a novel task in which participants rated a selection of verbs on their implied vertical movement. Ratings correlated well with distributional semantic models, establishing convergent validity, though some variance was unexplained by language statistics alone. Experiment 2 replicated previous noun‐based location‐cue congruency experimental paradigms with verbs and showed that the ratings obtained in Experiment 1 predicted reaction times more strongly than language statistics. Experiment 3 modified the location‐cue paradigm by adding movement to create an animated, temporally decoupled, movement‐verb judgment task designed to examine the relative influence of symbolic and embodied processing for verbs. Results were generally consistent with linguistic shortcut hypotheses of symbolic‐embodied integrated language processing; location‐cue congruence elicited processing facilitation in some conditions, and perceptual information accounted for reaction times and accuracy better than language statistics alone. These studies demonstrate novel ways in which embodied and linguistic information can be examined while using verbs as stimuli. 
  4. Abstract Occipital cortices of different sighted people contain analogous maps of visual information (e.g. foveal vs. peripheral). In congenital blindness, “visual” cortices respond to nonvisual stimuli. Do visual cortices of different blind people represent common informational maps? We leverage naturalistic stimuli and inter-subject pattern similarity analysis to address this question. Blindfolded sighted (n = 22) and congenitally blind (n = 22) participants listened to 6 sound clips (5–7 min each): 3 auditory excerpts from movies; a naturalistic spoken narrative; and matched degraded auditory stimuli (Backwards Speech, scrambled sentences), during functional magnetic resonance imaging scanning. We compared the spatial activity patterns evoked by each unique 10-s segment of the different auditory excerpts across blind and sighted people. Segments of meaningful naturalistic stimuli produced distinctive activity patterns in frontotemporal networks that were shared across blind and across sighted individuals. In the blind group only, segment-specific, cross-subject patterns emerged in visual cortex, but only for meaningful naturalistic stimuli and not Backwards Speech. Spatial patterns of activity within visual cortices are sensitive to time-varying information in meaningful naturalistic auditory stimuli in a broadly similar manner across blind individuals. 
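The inter-subject pattern similarity logic in this abstract can be sketched in a few lines: for each segment, correlate one subject's voxel pattern with the average pattern of the remaining subjects (leave-one-out). The function name and array layout below are assumptions for illustration, not the authors' analysis code.

```python
# Sketch of inter-subject pattern similarity (leave-one-out):
# segment-specific shared patterns show up as a strong diagonal in the
# segment-by-segment correlation matrix.
import numpy as np

def intersubject_pattern_similarity(data):
    """data: array of shape (n_subjects, n_segments, n_voxels).

    Returns sim of shape (n_subjects, n_segments, n_segments), where
    sim[s, i, j] correlates subject s's pattern for segment i with the
    mean pattern of all other subjects for segment j.
    """
    n_sub, n_seg, _ = data.shape
    sim = np.zeros((n_sub, n_seg, n_seg))
    for s in range(n_sub):
        # average pattern of everyone except subject s
        others = data[np.arange(n_sub) != s].mean(axis=0)  # (n_seg, n_voxels)
        for i in range(n_seg):
            for j in range(n_seg):
                sim[s, i, j] = np.corrcoef(data[s, i], others[j])[0, 1]
    return sim
```

If segment-specific information is shared across subjects, the matching-segment (diagonal) correlations exceed the mismatching (off-diagonal) ones, which is the signature the study looks for in visual cortex of blind participants.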
  5. Maximum Likelihood Difference Scaling (MLDS) is an efficient method of estimating perceptual representations of suprathreshold physical quantities (Maloney & Yang, 2003), such as luminance contrast. In MLDS, observers can be instructed to judge which of two stimulus pairs is more similar, or which of the two pairs is more different. If the same physical attributes are used for both the similar and dissimilar tasks, the two criteria should produce the same perceptual scales. We estimated perceptual scales for suprathreshold achromatic square patches. Increments and decrements on the mid-gray background were estimated separately. Observers judged which pair of stimuli was more similar in half of the sessions, and more different in the other half of the sessions. For most observers, the two tasks produced the same perceptual scales: a decelerating curve for increment contrasts and a cubic curve for decremental contrasts (cf. Whittle, 1992). These scales predicted forced-choice contrast discrimination thresholds for both increments and decrements. However, for a subset of observers, the ‘more different’ judgments produced scales that accelerated with contrast for both increments and decrements; these scale shapes do not predict their discrimination thresholds. Our results suggest that, even with these simple stimuli, observers in an MLDS experiment may attend to different aspects of the stimulus depending on the assigned task.
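The MLDS estimation procedure this abstract relies on can be sketched compactly. This is a minimal sketch under a standard Gaussian decision model, not the authors' code: all names are assumptions, psi[0] is anchored at 0, and the scale is identified only up to a common multiplicative factor.

```python
# Minimal MLDS sketch: observers see two stimulus pairs (a, b) and (c, d)
# and report which pair is "more different"; perceptual scale values psi
# are recovered by maximum likelihood under a Gaussian noise model.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_likelihood(params, quads, responses):
    # psi[0] is fixed at 0; the remaining scale values and sigma are free
    psi = np.concatenate([[0.0], params[:-1]])
    sigma = abs(params[-1]) + 1e-6
    a, b, c, d = quads.T
    # decision variable: difference between the two pair differences
    delta = np.abs(psi[c] - psi[d]) - np.abs(psi[a] - psi[b])
    p = np.clip(norm.cdf(delta / sigma), 1e-9, 1 - 1e-9)
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

def fit_mlds(quads, responses, n_levels):
    """quads: (n_trials, 4) stimulus-level indices; responses: 1 if the
    second pair (c, d) was judged more different, else 0."""
    x0 = np.concatenate([np.linspace(0.1, 1.0, n_levels - 1), [0.2]])
    res = minimize(neg_log_likelihood, x0, args=(quads, responses),
                   method="L-BFGS-B")
    psi = np.concatenate([[0.0], res.x[:-1]])
    return psi / psi[-1]  # normalize: scale is defined up to a common factor
```

A 'more similar' instruction fits the same model with the response coding flipped, which is why, under the model, the two tasks should yield the same scale; the abstract's point is that for some observers they empirically do not.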