- Award ID(s):
- 1735225
- PAR ID:
- 10177593
- Date Published:
- Journal Name:
- Bilingualism: Language and Cognition
- Volume:
- 23
- Issue:
- 1
- ISSN:
- 1366-7289
- Page Range / eLocation ID:
- 158 to 170
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Spatial language is often used metaphorically to describe other domains, including time (long sound) and pitch (high sound). How does experience with these metaphors shape the ability to associate space with other domains? Here, we tested 3- to 6-year-old English-speaking children and adults with a cross-domain matching task. We probed cross-domain relations that are expressed in English metaphors for time and pitch (length-time and height-pitch), as well as relations that are unconventional in English but expressed in other languages (size-time and thickness-pitch). Participants were tested with a perceptual matching task, in which they matched between spatial stimuli and sounds of different durations or pitches, and a linguistic matching task, in which they matched between a label denoting a spatial attribute, duration, or pitch, and a picture or sound representing another dimension. Contrary to previous claims that experience with linguistic metaphors is necessary for children to make cross-domain mappings, children performed above chance for both familiar and unfamiliar relations in both tasks, as did adults. Children’s performance was also better when a label was provided for one of the dimensions, but only when making length-time, size-time, and height-pitch mappings (not thickness-pitch mappings). These findings suggest that, although experience with metaphorical language is not necessary to make cross-domain mappings, labels can promote these mappings, both when they have familiar metaphorical uses (e.g., English ‘long’ denotes both length and duration), and when they describe dimensions that share a common ordinal reference frame (e.g., size and duration, but not thickness and pitch).
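To make the chance-level comparison concrete, here is a minimal Python sketch of how above-chance performance in a two-alternative matching task might be tested with a binomial test against the 50% guessing rate. The trial counts and accuracy below are hypothetical placeholders, not figures from the study.

```python
# Hypothetical check of above-chance performance in a two-alternative
# cross-domain matching task (chance = 0.5). Counts are made up for
# illustration; they are not the study's data.
from scipy.stats import binomtest

n_trials = 24    # matching trials per participant (assumed)
n_correct = 18   # correct cross-domain matches (assumed)

# One-sided test: is accuracy greater than the 50% chance level?
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
```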
-
Statistical learning (SL) is the ability to detect and learn regularities from input and is foundational to language acquisition. Despite the dominant role of SL as a theoretical construct for language development, there is a lack of direct evidence supporting the shared neural substrates underlying language processing and SL. It is also not clear whether the similarities, if any, are related to linguistic processing or to statistical regularities in general. The current study tests whether the brain regions involved in natural language processing are similarly recruited during auditory linguistic SL. Twenty-two adults performed an auditory linguistic SL task, an auditory nonlinguistic SL task, and a passive story listening task while their neural activation was monitored. Within the language network, the left posterior temporal gyrus showed sensitivity to embedded speech regularities during auditory linguistic SL, but not during auditory nonlinguistic SL. Using a multivoxel pattern similarity analysis, we uncovered similarities between the neural representations of auditory linguistic SL and language processing within the left posterior temporal gyrus. No other brain regions showed similarities between linguistic SL and language comprehension, suggesting that a shared neurocomputational process for auditory SL and natural language processing within the left posterior temporal gyrus is specific to linguistic stimuli.
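A multivoxel pattern similarity analysis of the kind described can be reduced to correlating voxel-wise activation patterns across conditions within a region of interest. The Python sketch below uses random arrays as stand-ins for real beta estimates; the ROI size is an assumption, and this is not the authors' analysis code.

```python
# Minimal sketch of a multivoxel pattern similarity analysis: correlate
# voxel-wise activation patterns from two conditions within one ROI
# (e.g., left posterior temporal gyrus). Patterns here are random
# placeholders, not real beta estimates.
import numpy as np

rng = np.random.default_rng(0)
n_voxels = 200                            # voxels in the ROI (assumed)
pattern_sl = rng.normal(size=n_voxels)    # linguistic-SL pattern
pattern_lang = rng.normal(size=n_voxels)  # story-listening pattern

# Pearson correlation between the two patterns = pattern similarity
similarity = np.corrcoef(pattern_sl, pattern_lang)[0, 1]
print(f"ROI pattern similarity r = {similarity:.3f}")
```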
-
Cross-modal effects provide a model framework for investigating hierarchical inter-areal processing, particularly under conditions where unimodal cortical areas receive contextual feedback from other modalities. Here, using complementary behavioral and brain imaging techniques, we investigated the functional networks participating in face and voice processing during gender perception, a high-level feature of voice and face perception. Within the framework of a signal detection decision model, maximum likelihood conjoint measurement (MLCM) was used to estimate the contributions of the face and voice to gender comparisons between pairs of audio-visual stimuli in which the face and voice were independently modulated. Top-down contributions were varied by instructing participants to make judgments based on the gender of either the face, the voice, or both modalities (N = 12 for each task). Estimated face and voice contributions to the judgments of the stimulus pairs were not independent; both contributed to all tasks, but their respective weights varied over a 40-fold range due to top-down influences. The models that best described the modal contributions required two different top-down interactions: (i) an interaction that depended on gender congruence across modalities (i.e., the difference between face and voice modalities for each stimulus); and (ii) an interaction that depended on the gender magnitude within each modality. The significance of these interactions was task dependent: the gender congruence interaction was significant for the face and voice tasks, while the gender magnitude interaction was significant for the face and stimulus tasks. Subsequently, we used the same stimuli and related tasks in a functional magnetic resonance imaging (fMRI) paradigm (N = 12) to explore the neural correlates of these perceptual processes, analyzed with Dynamic Causal Modeling (DCM) and Bayesian Model Selection. Results revealed changes in effective connectivity between the unimodal Fusiform Face Area (FFA) and Temporal Voice Area (TVA) that paralleled the face and voice behavioral interactions observed in the psychophysical data. These findings clarify the role that multiple parallel feedback pathways to unimodal areas play in perception.
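For readers unfamiliar with MLCM, the additive version of the model can be sketched compactly: the probability of choosing one audio-visual stimulus over the other is a cumulative normal of the difference in summed face and voice scale values, and the scale values are estimated by maximizing the Bernoulli likelihood. The Python sketch below fits this additive model to simulated choices; the number of morph levels, the trial count, and the generating scale values are all assumptions, and this is not the authors' fitting code.

```python
# Minimal sketch of additive maximum likelihood conjoint measurement
# (MLCM): on each trial the observer compares two audio-visual stimuli,
# and P(choose stimulus 2) is a cumulative normal of the difference in
# summed face + voice scale values. All data below are simulated.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

n_levels = 5                 # gender morph levels per modality (assumed)
n_trials = 2000              # simulated comparison trials (assumed)
rng = np.random.default_rng(1)

# Indices of the face and voice levels for the two stimuli in each pair
f1, f2 = rng.integers(0, n_levels, (2, n_trials))
v1, v2 = rng.integers(0, n_levels, (2, n_trials))

true_face = np.linspace(0, 3, n_levels)    # assumed generating scales:
true_voice = np.linspace(0, 1.5, n_levels) # voice weighted less
d = (true_face[f2] + true_voice[v2]) - (true_face[f1] + true_voice[v1])
choose2 = rng.random(n_trials) < norm.cdf(d)

def neg_log_lik(params):
    # First level of each scale is fixed at 0 for identifiability;
    # decision noise has unit variance (probit link).
    face = np.concatenate(([0.0], params[:n_levels - 1]))
    voice = np.concatenate(([0.0], params[n_levels - 1:]))
    delta = (face[f2] + voice[v2]) - (face[f1] + voice[v1])
    p = np.clip(norm.cdf(delta), 1e-9, 1 - 1e-9)
    return -np.sum(np.where(choose2, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, np.zeros(2 * (n_levels - 1)), method="BFGS")
print("estimated face scale: ", np.round(fit.x[:n_levels - 1], 2))
print("estimated voice scale:", np.round(fit.x[n_levels - 1:], 2))
```

The ratio of the recovered face and voice scale ranges plays the role of the relative modality weights discussed above; the congruence and magnitude interactions would enter as additional terms in the decision variable.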
-
Previous research suggests that learning to use a phonetic property [e.g., voice-onset-time (VOT)] for talker identity supports a left-ear processing advantage. Specifically, listeners trained to identify two “talkers” who differed only in characteristic VOTs showed faster talker identification for stimuli presented to the left ear than for stimuli presented to the right ear, which is interpreted as evidence of hemispheric lateralization consistent with task demands. Experiment 1 (n = 97) aimed to replicate this finding and identify predictors of performance; experiment 2 (n = 79) aimed to replicate it under conditions that better facilitate observation of laterality effects. Listeners completed a talker identification task during pretest, training, and posttest phases. Inhibition, category identification, and auditory acuity were also assessed in experiment 1. Listeners learned to use VOT for talker identity, and this learning was positively associated with auditory acuity. Talker identification was not influenced by ear of presentation, and Bayes factors indicated strong support for the null. These results suggest that talker-specific phonetic variation is not sufficient to induce a left-ear advantage for talker identification; together with the extant literature, this instead suggests that hemispheric lateralization for talker-specific phonetic variation requires phonetic variation to be conditioned on talker differences in source characteristics.
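As an illustration of how "strong support for the null" can be quantified, the sketch below approximates a Bayes factor from the BICs of two nested Gaussian models of left-ear minus right-ear response time differences (the BIC approximation of Wagenmakers, 2007). The data are simulated, and this is one of several possible Bayes factor methods rather than necessarily the one the study used.

```python
# Rough sketch of quantifying support for a null ear-of-presentation
# effect with a BIC-approximated Bayes factor (Wagenmakers, 2007):
# BF01 ~= exp((BIC_alt - BIC_null) / 2) for nested Gaussian models of
# left-ear minus right-ear RT differences. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
d = rng.normal(loc=0.0, scale=50.0, size=79)  # simulated RT differences (ms)
n = d.size

def gaussian_bic(resid, k_means):
    # BIC of a Gaussian model with k_means mean parameters
    # (the variance is always estimated, hence the +1)
    sigma2 = np.mean(resid ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * log_lik + (k_means + 1) * np.log(n)

bic_null = gaussian_bic(d, k_means=0)            # mean fixed at 0
bic_alt = gaussian_bic(d - d.mean(), k_means=1)  # mean freely estimated

bf01 = np.exp((bic_alt - bic_null) / 2)          # evidence for the null
print(f"BF01 = {bf01:.2f}")
```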
-
Holistic processing (HP) of faces refers to the obligatory, simultaneous processing of the parts and their relations, and it emerges over the course of development. HP is manifest in a decrement in the perception of inverted versus upright faces and a reduction in face processing ability when the relations between parts are perturbed. Here, adopting the HP framework for faces, we examined the developmental emergence of HP in another domain for which human adults have expertise, namely, visual word processing. Children, adolescents, and adults performed a lexical decision task, and we used two established signatures of HP for faces: the advantage in perception of upright over inverted words and nonwords, and the reduced sensitivity to increasing parts (word length). Relative to the other groups, children showed less of an advantage for upright versus inverted trials, and their lexical decisions were more affected by increasing word length. Performance on these HP indices was strongly associated with age and with reading proficiency. Also, the emergence of HP for word perception was not simply a result of improved visual perception over the course of development, as no group differences were observed on an object decision task. These results reveal the developmental emergence of HP for orthographic input and reflect a further instance of experience-dependent tuning of visual perception. These results also add to existing findings on the commonalities of mechanisms of word and face recognition.
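The two HP signatures lend themselves to simple trial-level indices: an inversion cost (inverted minus upright performance) and a word-length slope. The following Python sketch computes both from simulated lexical decision data; the trial counts, response times, and effect sizes are illustrative assumptions, not the study's data.

```python
# Illustrative computation of the two holistic-processing indices
# described above from trial-level lexical decision data: the upright
# vs. inverted RT advantage and the slope of RT over word length.
# The DataFrame below is simulated.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 400
trials = pd.DataFrame({
    "rt": rng.normal(700, 80, n),        # baseline RTs in ms (assumed)
    "upright": rng.integers(0, 2, n),    # 1 = upright, 0 = inverted
    "length": rng.integers(3, 8, n),     # word length in letters
})
# Inject an inversion cost (60 ms) and a word-length cost (15 ms/letter)
trials["rt"] += 60 * (1 - trials["upright"]) + 15 * trials["length"]

# Index 1: inversion cost = mean RT(inverted) - mean RT(upright)
inv_cost = (trials.loc[trials.upright == 0, "rt"].mean()
            - trials.loc[trials.upright == 1, "rt"].mean())

# Index 2: word-length slope (ms per letter) from a least-squares fit
slope = np.polyfit(trials["length"], trials["rt"], 1)[0]

print(f"inversion cost = {inv_cost:.1f} ms, "
      f"length slope = {slope:.1f} ms/letter")
```

On the logic of the abstract, smaller values of both indices for older groups would index the developmental emergence of HP for words.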