The dynamics of selective attention necessarily influences the course of early perceptual development. The intersensory redundancy hypothesis proposes that, in early development, information presented redundantly across two or more senses selectively recruits attention to the amodal properties of an object or event. In contrast, information presented to a single sense enhances attention to modality‐specific properties. The present study assessed the second of these predictions in neonatal bobwhite quail (Colinus virginianus), with a focus on the role of task difficulty in directing selective attention. In Experiment 1, we exposed quail chicks to unimodal auditory, nonredundant audiovisual, or redundant audiovisual presentations of a bobwhite maternal call paired with a pulsing light for 10 min/h on the day following hatching. Chicks were subsequently individually tested 24 h later for their unimodal auditory preference between the familiarized maternal call and the same call with pitch altered by two steps. Chicks from all experimental groups preferred the familiarized maternal call over the altered maternal call. In Experiment 2, we repeated the exposure conditions of Experiment 1, but presented a more difficult task by narrowing the pitch range between the two maternal calls during testing. Chicks in the unimodal auditory and nonredundant audiovisual conditions preferred the familiarized call, whereas chicks in the redundant audiovisual exposure group showed no detection of the pitch change. Our results indicate that early discrimination of pitch change is disrupted by intersensory redundancy under difficult but not easy task conditions. These findings, along with findings from human infants, highlight the role of task difficulty in shifting attentional selectivity and underscore the dynamic nature of neonatal attentional salience hierarchies.
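The test stimuli described above contrast a familiarized maternal call with a pitch-altered variant, with the pitch difference narrowed to make the task harder. As a rough illustration only, the sketch below shows how such variants might be generated from a recorded call with librosa; the file name, the specific shift amounts, and the assumption that "steps" behave like semitone-scale shifts are illustrative and not taken from the paper.

```python
# Illustrative only: generate pitch-shifted variants of a recorded call.
# The shift amounts and the semitone-like interpretation of "steps" are
# assumptions for illustration; the study's exact manipulation may differ.
import librosa
import soundfile as sf

y, sr = librosa.load("maternal_call.wav", sr=None)  # hypothetical input file

# Easier discrimination: larger shift; harder discrimination: narrower shift.
easy_variant = librosa.effects.pitch_shift(y, sr=sr, n_steps=2.0)
hard_variant = librosa.effects.pitch_shift(y, sr=sr, n_steps=0.5)

sf.write("call_shifted_easy.wav", easy_variant, sr)
sf.write("call_shifted_hard.wav", hard_variant, sr)
```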
- Award ID(s): 1749541
- NSF-PAR ID: 10416434
- Date Published:
- Journal Name: Scientific Reports
- Volume: 12
- Issue: 1
- ISSN: 2045-2322
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this

-
Learning to process speech in a foreign language involves learning new representations for mapping the auditory signal to linguistic structure. Behavioral experiments suggest that even listeners that are highly proficient in a non-native language experience interference from representations of their native language. However, much of the evidence for such interference comes from tasks that may inadvertently increase the salience of native language competitors. Here we tested for neural evidence of proficiency and native language interference in a naturalistic story listening task. We studied electroencephalography responses of 39 native speakers of Dutch (14 male) to an English short story, spoken by a native speaker of either American English or Dutch. We modeled brain responses with multivariate temporal response functions, using acoustic and language models. We found evidence for activation of Dutch language statistics when listening to English, but only when it was spoken with a Dutch accent. This suggests that a naturalistic, monolingual setting decreases the interference from native language representations, whereas an accent in the listener's own native language may increase native language interference, by increasing the salience of the native language and activating native language phonetic and lexical representations. Brain responses suggest that such interference stems from words from the native language competing with the foreign language in a single word recognition system, rather than being activated in a parallel lexicon. We further found that secondary acoustic representations of speech (after 200 ms latency) decreased with increasing proficiency. This may reflect improved acoustic–phonetic models in more proficient listeners.
Significance Statement: Behavioral experiments suggest that native language knowledge interferes with foreign language listening, but such effects may be sensitive to task manipulations, as tasks that increase metalinguistic awareness may also increase native language interference. This highlights the need for studying non-native speech processing using naturalistic tasks. We measured neural responses unobtrusively while participants listened for comprehension and characterized the influence of proficiency at multiple levels of representation. We found that the salience of the native language, as manipulated through speaker accent, affected activation of native language representations: significant evidence for activation of native language (Dutch) categories was obtained only when the speaker had a Dutch accent, whereas no significant interference was found for a speaker with a native (American) accent.
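The modeling approach summarized in this abstract fits multivariate temporal response functions that map time-lagged stimulus features onto EEG. The sketch below is a minimal, generic illustration of that idea using time-lagged ridge regression; the synthetic data, the two placeholder features, the lag window, and the regularization value are assumptions for illustration and do not reproduce the study's acoustic and language-model predictors.

```python
# Minimal sketch of a multivariate temporal response function (TRF):
# ridge regression from time-lagged stimulus features to one EEG channel.
# Feature choices, lag window, and alpha are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(features, lags):
    """Stack copies of `features` (n_samples, n_features) at each lag."""
    n_samples, n_features = features.shape
    X = np.zeros((n_samples, n_features * len(lags)))
    for i, lag in enumerate(lags):
        shifted = np.roll(features, lag, axis=0)
        if lag > 0:
            shifted[:lag] = 0.0   # zero out samples wrapped from the end
        elif lag < 0:
            shifted[lag:] = 0.0
        X[:, i * n_features:(i + 1) * n_features] = shifted
    return X

fs = 100                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
features = rng.standard_normal((6000, 2))   # e.g., envelope + a word-level predictor
eeg = rng.standard_normal(6000)             # placeholder EEG channel

lags = list(range(0, int(0.4 * fs)))        # 0-400 ms of lags
X = lagged_design(features, lags)

trf = Ridge(alpha=1.0).fit(X, eeg)
weights = trf.coef_.reshape(len(lags), features.shape[1])  # (lags, features)
print(weights.shape)
```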
-
Abstract: Infant language learning depends on the distribution of co‐occurrences within language (between words and other words) and between language content and events in the world. Yet infant‐directed speech is not limited to words that refer to perceivable objects and actions. Rather, caregivers’ utterances contain a range of syntactic forms and expressions with diverse attentional, regulatory, social, and referential functions. We conducted a distributional analysis of linguistic content types at the utterance level, and demonstrated that a wide range of content types in maternal speech can be distinguished by their distribution in sequences of utterances and by their patterns of co‐occurrence with infants’ actions. We observed free‐play sessions of 38 12‐month‐old infants and their mothers, annotated maternal utterances for 10 content types, and coded infants’ gaze target and object handling. Results show that all content types tended to repeat in consecutive utterances, whereas preferred transitions between different content types reflected sequences from attention‐capturing to directing and then descriptive utterances. Specific content types were associated with infants’ engagement with objects (declaratives, descriptions, object names), with disengagement from objects (talk about attention, infant's name), and with infants’ gaze at the mother (affirmations). We discuss how structured discourse might facilitate language acquisition by making speech input more predictable and/or by providing clues about high‐level form‐function mappings.
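The sequential analysis described in this abstract can be illustrated with a small sketch that estimates transition probabilities between utterance-level content types from an annotated sequence; the content-type labels and the toy sequence below are placeholders, not the study's coding scheme or data.

```python
# Illustrative sketch: estimate a transition-probability matrix over
# utterance-level content types. Labels and the toy sequence are placeholders.
from collections import Counter, defaultdict

# A hypothetical annotated sequence of consecutive maternal utterances.
utterances = ["attention", "attention", "directive", "description",
              "description", "object_name", "affirmation", "attention",
              "directive", "description"]

counts = defaultdict(Counter)
for prev, nxt in zip(utterances, utterances[1:]):
    counts[prev][nxt] += 1

# Normalize rows into transition probabilities P(next type | current type).
transition_probs = {
    prev: {nxt: n / sum(row.values()) for nxt, n in row.items()}
    for prev, row in counts.items()
}

for prev, row in transition_probs.items():
    print(prev, row)
```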
-
Pragmatics and social meaning: Understanding under-informativeness in native and non-native speakers
Foreign-accented non-native speakers sometimes face negative biases compared to native speakers. Here we report an advantage in how comprehenders process the speech of non-native compared to native speakers. In a series of four experiments, we find that under-informative sentences are interpreted differently when attributed to non-native compared to native speakers. Specifically, under-informativeness is more likely to be attributed to inability (rather than unwillingness) to say more in non-native as compared to native speakers. This asymmetry has implications for learning: under-informative teachers are more likely to be given a second chance if they are non-native speakers of the language (presumably because their prior under-informativeness is less likely to be intentional). Our results suggest strong effects of non-native speech on social-pragmatic inferences. Because these effects emerge for written stimuli, they support theories that stress the role of expectations in non-native comprehension, even in the absence of experience with foreign accents. Finally, our data bear on pragmatic theories of how speaker identity affects language comprehension and show how such theories offer an integrated framework for explaining how non-native language can lead to (sometimes unexpected) social meanings.
-
Abstract: Parental responsiveness to infant behaviors is a strong predictor of infants' language and cognitive outcomes. The mechanisms underlying this effect, however, are relatively unknown. We examined the effects of parent speech on infants' visual attention, manual actions, hand‐eye coordination, and dyadic joint attention during parent‐infant free play. We report on two studies that used head‐mounted eye trackers in increasingly naturalistic laboratory environments. In Study 1, 12‐to‐24‐month‐old infants and their parents played on the floor of a seminaturalistic environment with 24 toys. In Study 2, a different sample of dyads played in a home‐like laboratory with 10 toys and no restrictions on their movement. In both studies, we present evidence that responsive parent speech extends the duration of infants' multimodal attention. This social "boost" of parent speech impacts multiple behaviors that have been linked to later outcomes: visual attention, manual actions, hand‐eye coordination, and joint attention. Further, the amount that parents talked during the interaction was negatively related to the effects of parent speech on infant attention. Together, these results provide evidence of a trade‐off between quantity of speech and its effects, suggesting multiple pathways through which parents impact infants' multimodal attention to shape the moment‐by‐moment dynamics of an interaction.
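As a generic illustration of the kind of measure discussed in this abstract, the sketch below compares the durations of infant attention bouts that do versus do not overlap a parent utterance; the interval format, the simple overlap rule, and the toy data are assumptions for illustration, not the studies' actual coding pipeline.

```python
# Illustrative sketch: compare durations of infant attention bouts with vs.
# without overlapping parent speech. Toy intervals stand in for coded data.
from statistics import mean

# Hypothetical (start, end) times in seconds.
attention_bouts = [(1.0, 4.5), (6.0, 7.2), (9.0, 15.0), (18.0, 19.5)]
parent_utterances = [(2.0, 3.0), (10.5, 12.0), (13.0, 14.0)]

def overlaps(bout, utterance):
    """True if the utterance overlaps the attention bout in time."""
    (b_start, b_end), (u_start, u_end) = bout, utterance
    return u_start < b_end and u_end > b_start

with_speech = [e - s for (s, e) in attention_bouts
               if any(overlaps((s, e), u) for u in parent_utterances)]
without_speech = [e - s for (s, e) in attention_bouts
                  if not any(overlaps((s, e), u) for u in parent_utterances)]

print("mean bout duration with parent speech:", mean(with_speech))
print("mean bout duration without parent speech:", mean(without_speech))
```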