Title: The Specificity of Sound Symbolic Correspondences in Spoken Language
Abstract

Although language has long been regarded as a primarily arbitrary system, sound symbolism, or non‐arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these mappings. This study investigated whether sound symbolic properties correspond to specific meanings, or whether these properties generalize across semantic dimensions. In three experiments, native English‐speaking adults heard sound symbolic foreign words for dimensional adjective pairs (big/small, round/pointy, fast/slow, moving/still) and, for each foreign word, selected a translation among English antonyms that either matched or mismatched with the correct meaning dimension. Listeners agreed more reliably on the English translation for matched relative to mismatched dimensions, though reliable cross‐dimensional mappings did occur. These findings suggest that although sound symbolic properties generalize to meanings that may share overlapping semantic features, sound symbolic mappings offer semantic specificity.
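The abstract does not spell out the statistics, but the task it describes is a two‑alternative forced choice between English antonyms, so one simple way to picture the matched‑versus‑mismatched comparison is a per‑dimension agreement test against chance (0.5). The sketch below is illustrative only; the counts, dimension labels, and the choice of a binomial test are assumptions, not the study's data or analysis.

```python
# Illustrative only: per-dimension agreement against chance (0.5) in a
# two-alternative forced-choice translation task. All counts are hypothetical.
from scipy.stats import binomtest

responses = {
    # condition -> dimension -> (choices of the expected antonym, total responses)
    "matched":    {"big/small": (41, 50), "round/pointy": (38, 50)},
    "mismatched": {"big/small": (29, 50), "round/pointy": (27, 50)},
}

for condition, dims in responses.items():
    for dim, (hits, n) in dims.items():
        result = binomtest(hits, n, p=0.5)  # chance level for two antonym choices
        print(f"{condition:10s} {dim:13s} agreement={hits / n:.2f} p={result.pvalue:.3f}")
```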

 
NSF-PAR ID: 10245682
Author(s) / Creator(s): ; ;
Publisher / Repository: Wiley-Blackwell
Date Published:
Journal Name: Cognitive Science
Volume: 41
Issue: 8
ISSN: 0364-0213
Page Range / eLocation ID: p. 2191-2220
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. Spatial language is often used metaphorically to describe other domains, including time (long sound) and pitch (high sound). How does experience with these metaphors shape the ability to associate space with other domains? Here, we tested 3- to 6-year-old English-speaking children and adults with a cross-domain matching task. We probed cross-domain relations that are expressed in English metaphors for time and pitch (length-time and height-pitch), as well as relations that are unconventional in English but expressed in other languages (size-time and thickness-pitch). Participants were tested with a perceptual matching task, in which they matched between spatial stimuli and sounds of different durations or pitches, and a linguistic matching task, in which they matched between a label denoting a spatial attribute, duration, or pitch, and a picture or sound representing another dimension. Contrary to previous claims that experience with linguistic metaphors is necessary for children to make cross-domain mappings, children performed above chance for both familiar and unfamiliar relations in both tasks, as did adults. Children’s performance was also better when a label was provided for one of the dimensions, but only when making length-time, size-time, and height-pitch mappings (not thickness-pitch mappings). These findings suggest that, although experience with metaphorical language is not necessary to make cross-domain mappings, labels can promote these mappings, both when they have familiar metaphorical uses (e.g., English ‘long’ denotes both length and duration), and when they describe dimensions that share a common ordinal reference frame (e.g., size and duration, but not thickness and pitch). 
  2. The goal of automatic resource bound analysis is to statically infer symbolic bounds on the resource consumption of the evaluation of a program. A longstanding challenge for automatic resource analysis is the inference of bounds that are functions of complex custom data structures. This article builds on type-based automatic amortized resource analysis (AARA) to address this challenge. AARA is based on the potential method of amortized analysis and reduces bound inference to standard type inference with additional linear constraint solving, even when deriving non-linear bounds. A key component of AARA is resource functions that generate the space of possible bounds for values of a given type while enjoying necessary closure properties. Existing work on AARA defined such functions for many data structures, such as lists of lists, but the question of whether such functions exist for arbitrary data structures remained open. This work answers this question positively by uniformly constructing resource polynomials for algebraic data structures defined by regular recursive types. These functions are a generalization of all previously proposed polynomial resource functions and can be seen as a general notion of polynomials for values of a given recursive type. A resource type system for FPC, a core language with recursive types, demonstrates how resource polynomials can be integrated with AARA while preserving all benefits of past techniques. The article also proposes new techniques for stating the rules of this type system and proving it sound. First, multivariate potential annotations are stated in terms of free semimodules, substantially abstracting details of the presentation of annotations and the proofs of their properties. Second, a logical relation giving semantic meaning to resource types enables a proof of soundness by a single induction on typing derivations.
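    As a rough illustration of the potential method this abstract refers to (not an example taken from the article itself), the textbook AARA typing of list append shows how a linear cost bound is carried as potential on the input. The annotation L^q(A) and the cost model of one unit per allocated cons cell are the standard ones from earlier AARA work.

```latex
% Standard AARA-style example (illustrative, not drawn from the article):
% L^q(A) is a list type whose elements each carry q units of potential.
% Under a cost model charging one unit per cons cell, append admits the typing
\[
  \mathit{append} : L^{1}(A) \times L^{0}(A) \to L^{0}(A),
  \qquad
  \Phi\bigl(\ell : L^{q}(A)\bigr) = q \cdot |\ell| .
\]
% The potential on the first argument pays for the cons cells the call allocates;
% inferring the annotation q = 1 reduces to solving the linear constraints
% generated by the typing derivation.
\[
  \underbrace{|\ell_1|}_{\text{actual cost}}
  \;\le\;
  \Phi\bigl(\ell_1 : L^{1}(A)\bigr) + \Phi\bigl(\ell_2 : L^{0}(A)\bigr)
  \;=\; |\ell_1| .
\]
```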
  3. In this study, we address the mapping problem for modal words: how do children learn the form‐to‐meaning mappings for words like ‘could’ or ‘have to’ in their input languages? Learning modal words poses considerable challenges as their meanings are conceptually complex and the way these meanings are mapped to grammatical forms and structures is likewise complex and cross‐linguistically variable. Against a backdrop of how cross‐linguistic modal systems can vary, we focus on new work highlighting the developmental roles of the following: (a) syntactic categories of modal words, (b) interrelationships between modal ‘force’ (possibility and necessity) and ‘flavour’ (root and epistemic), (c) semantic representations for modal forms and (d) children's own emerging modal systems, as a whole, which show that the way they map forms to the ‘modal meaning space’ (considering both force and flavour dimensions) diverges from how adults do, even if the same forms are present. Modality provides a rich natural laboratory for exploring the interrelationships between our conceptual world of possibilities, how concepts get packaged by the syntax–semantics of grammatical systems, and how child learners surmount these form‐meaning mapping challenges.

     
  4. The meaning of words in natural language depends crucially on context. However, most neuroimaging studies of word meaning use isolated words and isolated sentences with little context. Because the brain may process natural language differently from how it processes simplified stimuli, there is a pressing need to determine whether prior results on word meaning generalize to natural language. fMRI was used to record human brain activity while four subjects (two female) read words in four conditions that vary in context: narratives, isolated sentences, blocks of semantically similar words, and isolated words. We then compared the signal-to-noise ratio (SNR) of evoked brain responses, and we used a voxelwise encoding modeling approach to compare the representation of semantic information across the four conditions. We find four consistent effects of varying context. First, stimuli with more context evoke brain responses with higher SNR across bilateral visual, temporal, parietal, and prefrontal cortices compared with stimuli with little context. Second, increasing context increases the representation of semantic information across bilateral temporal, parietal, and prefrontal cortices at the group level. In individual subjects, only natural language stimuli consistently evoke widespread representation of semantic information. Third, context affects voxel semantic tuning. Finally, models estimated using stimuli with little context do not generalize well to natural language. These results show that context has large effects on the quality of neuroimaging data and on the representation of meaning in the brain. Thus, neuroimaging studies that use stimuli with little context may not generalize well to the natural regime.

    SIGNIFICANCE STATEMENT: Context is an important part of understanding the meaning of natural language, but most neuroimaging studies of meaning use isolated words and isolated sentences with little context. Here, we examined whether the results of neuroimaging studies that use out-of-context stimuli generalize to natural language. We find that increasing context improves the quality of neuroimaging data and changes where and how semantic information is represented in the brain. These results suggest that findings from studies using out-of-context stimuli may not generalize to natural language used in daily life.
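    For readers unfamiliar with the term, a voxelwise encoding model of the kind mentioned above is typically a regularized linear regression from stimulus features to each voxel's response, evaluated on held-out data. The sketch below is a generic, hypothetical version (synthetic data, arbitrary feature space and regularization strength), not the authors' pipeline.

```python
# Generic voxelwise encoding-model sketch (synthetic data; not the authors' pipeline).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 600, 200, 1000

X = rng.standard_normal((n_timepoints, n_features))            # stimulus (e.g., semantic) features
true_weights = rng.standard_normal((n_features, n_voxels))
Y = X @ true_weights + 5.0 * rng.standard_normal((n_timepoints, n_voxels))  # noisy voxel responses

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)

model = Ridge(alpha=100.0).fit(X_train, Y_train)               # one linear model per voxel, fit jointly
pred = model.predict(X_test)

# Encoding-model performance per voxel: correlation of predicted vs. held-out responses.
scores = np.array([np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)])
print(f"median voxelwise prediction correlation: {np.median(scores):.2f}")
```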

     
  5. Reading entails transforming visual symbols to sound and meaning. This process depends on specialized circuitry in the visual cortex, the visual word form area (VWFA). Recent findings suggest that this text‐selective cortex comprises at least two distinct subregions: the more posterior VWFA‐1 is sensitive to visual features, while the more anterior VWFA‐2 processes higher level language information. Here, we explore whether these two subregions also exhibit different patterns of functional connectivity. To this end, we capitalize on two complementary datasets: using the Natural Scenes Dataset (NSD), we identify text‐selective responses in high‐quality 7T adult data (N = 8), and investigate functional connectivity patterns of VWFA‐1 and VWFA‐2 at the individual level. We then turn to the Healthy Brain Network (HBN) database to assess whether these patterns replicate in a large developmental sample (N = 224; age 6–20 years), and whether they relate to reading development. In both datasets, we find that VWFA‐1 is primarily correlated with bilateral visual regions. In contrast, VWFA‐2 is more strongly correlated with language regions in the frontal and lateral parietal lobes, particularly the bilateral inferior frontal gyrus. Critically, these patterns do not generalize to adjacent face‐selective regions, suggesting a specific relationship between VWFA‐2 and the frontal language network. No correlations were observed between functional connectivity and reading ability. Together, our findings support the distinction between subregions of the VWFA, and suggest that functional connectivity patterns in the ventral temporal cortex are consistent over a wide range of reading skills.

     