Abstract Cross-linguistic interactions are the hallmark of bilingual development. Theoretical perspectives highlight the key role of cross-linguistic distances and language structure in literacy development. Despite the strong theoretical assumptions, the impact of such bilingualism factors in heritage-language speakers remains elusive given the high variability in children's heritage-language experiences. A longitudinal inquiry of heritage-language learners of structurally distinct languages – Spanish–English and Chinese–English bilinguals (N = 181, mean age = 7.57 years, measured 1.5 years apart) – aimed to fill this gap. Spanish–English bilinguals showed stronger associations between morphological awareness skills across their two languages, across time, likely reflecting cross-linguistic similarities in vocabulary and lexical morphology between Spanish and English. Chinese–English bilinguals, however, showed stronger associations between morphological and word reading skills in English, likely reflecting the critical role of morphology in spoken and written Chinese word structure. The findings inform theories of literacy by uncovering the mechanisms by which bilingualism factors influence child literacy development.
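To make the analytic logic concrete, the sketch below illustrates the kind of cross-lagged correlational question the abstract describes: does morphological awareness (MA) in one language at Time 1 relate to skills in the other language at Time 2? This is a minimal illustration only, not the authors' analysis pipeline; all variable names and scores are hypothetical.

```python
# Minimal sketch of cross-lagged correlations in a bilingual longitudinal
# design. All column names and values are hypothetical illustrations.
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical scores: morphological awareness (MA) and word reading (WR)
# at Time 1 and Time 2, measured 1.5 years apart.
df = pd.DataFrame({
    "heritage_MA_t1": [12, 18, 9, 15, 20, 11],
    "english_MA_t2":  [14, 19, 10, 16, 22, 12],
    "english_MA_t1":  [13, 17, 8, 14, 21, 10],
    "english_WR_t2":  [30, 42, 25, 35, 48, 28],
})

# Cross-linguistic association: Time-1 heritage-language MA with Time-2 English MA.
r_cross, p_cross = pearsonr(df["heritage_MA_t1"], df["english_MA_t2"])

# Within-language association: Time-1 English MA with Time-2 English word reading.
r_read, p_read = pearsonr(df["english_MA_t1"], df["english_WR_t2"])

print(f"Cross-language MA association:  r = {r_cross:.2f}, p = {p_cross:.3f}")
print(f"MA -> word-reading association: r = {r_read:.2f}, p = {p_read:.3f}")
```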
Amodal phonology
Does knowledge of language transfer spontaneously across language modalities? For example, do English speakers, who have had no command of a sign language, spontaneously project grammatical constraints from English to linguistic signs? Here, we address this question by examining the constraints on doubling. We first demonstrate that doubling (e.g., panana, generally, ABB) is amenable to two conflicting parses (identity vs. reduplication), depending on the level of analysis (phonology vs. morphology). We next show that speakers with no command of a sign language spontaneously project these two parses to novel ABB signs in American Sign Language. Moreover, the chosen parse (for signs) is constrained by the morphology of spoken language. Hebrew speakers can project the morphological parse when doubling indicates diminution, but English speakers only do so when doubling indicates plurality, in line with the distinct morphological properties of their spoken languages. These observations suggest that doubling in speech and signs is constrained by a common set of linguistic principles that are algebraic, amodal and abstract.
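To make the two parses concrete, here is a toy sketch (an illustration only, not the authors' model). The phonological parse treats ABB as an identity relation between the final two syllables; the morphological parse treats ABB as the output of a reduplication operation that copies the final syllable. Syllable segmentation is given, not computed.

```python
# Toy illustration of the two parses of doubling (ABB, e.g. "pa-na-na").

def is_abb(syllables):
    """Phonological parse: the final two syllables stand in an identity relation."""
    return (len(syllables) == 3
            and syllables[1] == syllables[2]
            and syllables[0] != syllables[1])

def reduplicate_final(syllables):
    """Morphological parse: derive an ABB form by copying the final syllable."""
    return syllables + [syllables[-1]]

print(is_abb(["pa", "na", "na"]))       # True  -- ABB (identity holds)
print(is_abb(["pa", "na", "ta"]))       # False -- ABC (no identity)
print(reduplicate_final(["pa", "na"]))  # ['pa', 'na', 'na'] -- reduplication yields ABB
```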
- PAR ID:
- 10192839
- Date Published:
- Journal Name:
- Journal of Linguistics
- ISSN:
- 1469-7742
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
Abstract Infants readily extract linguistic rules from speech. Here, we ask whether this advantage extends to linguistic stimuli that do not rely on the spoken modality. To address this question, we first examine whether infants can differentially learn rules from linguistic signs. We show that, despite having no previous experience with a sign language, six-month-old infants can extract the reduplicative rule (AA) from dynamic linguistic signs, and the neural response to reduplicative linguistic signs differs from reduplicative visual controls, matched for the dynamic spatiotemporal properties of signs. We next demonstrate that the brain response for reduplicative signs is similar to the response to reduplicative speech stimuli. Rule learning, then, apparently depends on the linguistic status of the stimulus, not its sensory modality. These results suggest that infants are language-ready. They possess a powerful rule system that is differentially engaged by all linguistic stimuli, speech or sign.
-
We investigate the roles of linguistic and sensory experience in the early-produced visual, auditory, and abstract words of congenitally blind toddlers, deaf toddlers, and typically sighted/hearing peers. We also assess the role of language access by comparing early word production in children learning English or American Sign Language (ASL) from birth, versus at a delay. Using parental report data on child word production from the MacArthur-Bates Communicative Development Inventory, we found evidence that while children produced words referring to imperceptible referents before age 2, such words were less likely to be produced relative to words with perceptible referents. For instance, blind (vs. sighted) children said fewer highly visual words like “blue” or “see”; deaf signing (vs. hearing) children produced fewer auditory signs like HEAR. Additionally, in spoken English and ASL, children who received delayed language access were less likely to produce words overall. These results demonstrate and begin to quantify how linguistic and sensory access may influence which words young children produce.
-
Linguistic characteristics of bimodal bilingual code-blending: Evidence from acceptability judgments
Abstract Code-blending is the simultaneous expression of utterances using both a sign language and a spoken language. We expect that, like code-switching, code-blending is linguistically constrained, and thus we investigate two hypothesized constraints using an acceptability judgment task. Participants rated the acceptability of code-blended utterances designed to be consistent or inconsistent with these hypothesized constraints. We find strong support for the proposed constraint that each modality of code-blended utterances contributes content to a single proposition. We also find support for the proposed constraint that – at least for American Sign Language (ASL) and English – code-blended utterances make use of a single derivation which is realized using surface forms in the two languages, rather than two simultaneous derivations, one for each language. While this study was limited to ASL/English code-blending and further investigation is needed, we hope that this novel study will encourage future research comparing linguistic constraints on code-blending and code-switching.
-
Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e. recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To this end, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To do this, we correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers cue in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie the detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
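The analysis logic above (predicting non-signer judgments from coded phonological features of signs) can be sketched as a simple regression. The sketch below is a hypothetical illustration: the feature coding scheme, ratings, and values are invented for demonstration and are not the study's actual data.

```python
# Minimal sketch: predict non-signer transitivity judgments from
# phonological features of verb signs. All data here are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

# One row per sign; binary phonological features, e.g.
# [two-handed, body contact, path movement] (hypothetical coding scheme).
features = np.array([
    [1, 0, 1],
    [0, 1, 0],
    [1, 1, 1],
    [0, 0, 1],
    [1, 0, 0],
    [0, 1, 1],
])

# Mean non-signer "transitive" rating per sign, on a 0-1 scale (hypothetical).
judgments = np.array([0.8, 0.3, 0.9, 0.4, 0.6, 0.5])

model = LinearRegression().fit(features, judgments)
print("Feature weights:", model.coef_)  # which visual cues predict judgments
print("Variance explained (R^2):", model.score(features, judgments))
```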