Does knowledge of language transfer spontaneously across language modalities? For example, do English speakers who have no command of a sign language spontaneously project grammatical constraints from English onto linguistic signs? Here, we address this question by examining the constraints on doubling. We first demonstrate that doubling (e.g., panana; generally, ABB) is amenable to two conflicting parses (identity vs. reduplication), depending on the level of analysis (phonology vs. morphology). We next show that speakers with no command of a sign language spontaneously project these two parses onto novel ABB signs in American Sign Language. Moreover, the chosen parse (for signs) is constrained by the morphology of the spoken language: Hebrew speakers project the morphological parse when doubling indicates diminution, whereas English speakers do so only when doubling indicates plurality, in line with the distinct morphological properties of their spoken languages. These observations suggest that doubling in speech and signs is constrained by a common set of linguistic principles that are algebraic, amodal, and abstract.
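To make the two parses concrete, here is a minimal Python sketch (the function names and syllable encoding are illustrative assumptions, not the authors' materials): the same ABB form satisfies a surface-identity check at the phonological level and a base-plus-copy analysis at the morphological level.

```python
# Minimal sketch of the two parses of ABB doubling (hypothetical helper names).
# Phonological parse: the last two syllables are merely identical segments.
# Morphological parse: the final syllable is a copy produced by a reduplication rule.

def is_identity_parse(syllables):
    """Phonology: surface identity -- two adjacent syllables share the same form."""
    return len(syllables) >= 2 and syllables[-1] == syllables[-2]

def reduplication_parse(syllables):
    """Morphology: analyze ABB as a base plus a reduplicated final syllable,
    returning the base and the copy if the form fits the rule, else None."""
    if len(syllables) == 3 and syllables[1] == syllables[2]:
        return {"base": syllables[:2], "reduplicant": syllables[2]}
    return None

syllables = ["pa", "na", "na"]  # panana: ABB
print(is_identity_parse(syllables))    # True -> identity at the phonological level
print(reduplication_parse(syllables))  # {'base': ['pa', 'na'], 'reduplicant': 'na'}
```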
Infants differentially extract rules from language
Abstract: Infants readily extract linguistic rules from speech. Here, we ask whether this advantage extends to linguistic stimuli that do not rely on the spoken modality. To address this question, we first examine whether infants can differentially learn rules from linguistic signs. We show that, despite having no previous experience with a sign language, six-month-old infants can extract the reduplicative rule (AA) from dynamic linguistic signs, and that the neural response to reduplicative linguistic signs differs from the response to reduplicative visual controls, matched for the dynamic spatiotemporal properties of signs. We next demonstrate that the brain response to reduplicative signs is similar to the response to reduplicative speech stimuli. Rule learning, then, apparently depends on the linguistic status of the stimulus, not its sensory modality. These results suggest that infants are language-ready: they possess a powerful rule system that is differentially engaged by all linguistic stimuli, speech or sign.
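A rule of this kind is algebraic in that it is stated over a variable rather than specific tokens; the sketch below (token encoding invented) shows why such a rule applies equally to familiar syllables and to entirely novel signed units.

```python
# Minimal sketch of an algebraic identity (AA) rule, assuming stimuli are
# encoded as sequences of tokens; the names and encoding are hypothetical.
# The rule is stated over a variable X ("X followed by X"), so it matches
# items never seen before -- including signs -- not just familiar tokens.

def matches_aa(sequence):
    """True if the sequence instantiates the rule X X for some token X."""
    return len(sequence) == 2 and sequence[0] == sequence[1]

# Familiar spoken syllables and novel signed units both satisfy the same
# rule, because the rule refers to the variable, not to the tokens.
print(matches_aa(["ba", "ba"]))          # True
print(matches_aa(["SIGN-7", "SIGN-7"]))  # True: novel item, same rule
print(matches_aa(["ba", "di"]))          # False (AB)
```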
- Award ID(s): 1733984
- PAR ID: 10376396
- Date Published:
- Journal Name: Scientific Reports
- Volume: 11
- Issue: 1
- ISSN: 2045-2322
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Abstract: To understand human language, both spoken and signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g., phrases, words, sounds, phonemes?) has been a subject of long-standing debate. We re-frame this question to ask: what properties of the incoming visual or auditory signal are indispensable to eliciting language comprehension? In this review, we assess the phenomenon of language parsing from a modality-independent viewpoint. We show that the interplay between dynamic changes in the entropy of the signal and neural entrainment to the signal at the syllable level (4–5 Hz range) is causally related to language comprehension in both speech and sign language. This modality-independent Entropy Syllable Parsing model for the linguistic signal offers insight into the mechanisms of language processing, suggesting common neurocomputational bases for syllables in speech and sign language. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Language in Mind and Brain; Linguistics > Computational Models of Language; Psychology > Language.
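To illustrate the two quantities the model relates, here is a minimal, hypothetical sketch (not the authors' pipeline): a sliding-window Shannon entropy of a toy signal's amplitude envelope, plus a band-pass isolating the 4–5 Hz syllable-rate band to which neural activity is said to entrain. All parameters (sampling rate, window size, bin count) are invented.

```python
# Sketch: dynamic entropy of an amplitude envelope plus a 4-5 Hz band-pass,
# illustrating the two quantities the model relates; parameters are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100.0                             # sampling rate (Hz), hypothetical
t = np.arange(0, 10, 1 / fs)
signal = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 4.5 * t))  # toy input

envelope = np.abs(hilbert(signal))     # amplitude envelope of the signal

def sliding_entropy(x, win=100, bins=16):
    """Shannon entropy of x in non-overlapping windows of `win` samples."""
    out = []
    for start in range(0, x.size - win + 1, win):
        hist, _ = np.histogram(x[start:start + win], bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        out.append(-np.sum(p * np.log2(p)))
    return np.array(out)

entropy_track = sliding_entropy(envelope)  # dynamic changes in signal entropy

# Isolate the syllable-rate (4-5 Hz) band implicated in neural entrainment.
b, a = butter(4, [4 / (fs / 2), 5 / (fs / 2)], btype="band")
syllable_band = filtfilt(b, a, envelope)
print(entropy_track.shape, syllable_band.shape)
```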
- Unbounded productivity is a hallmark of linguistic competence. Here, we asked whether this capacity automatically applies to signs. Participants saw video clips of novel signs in American Sign Language (ASL) produced by a signer whose body appeared in a monochromatic color, and they quickly identified the signs' color. The critical manipulation compared reduplicative (αα) signs to non-reduplicative (αβ) controls. Past research has shown that reduplication is frequent in ASL, and frequent structures elicit stronger Stroop interference. If signers automatically generalize the reduplication function, then αα signs should elicit stronger color-naming interference. Results showed no effect of reduplication for signs whose base (α) consisted of native ASL features (possibly due to the similarity of α items to color names). Remarkably, signers were highly sensitive to reduplication when the base (α) included novel features. These results demonstrate that signers can freely extend their linguistic knowledge to novel forms, and they do so automatically. Unbounded productivity thus defines all languages, irrespective of input modality.
- Long-standing cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To this end, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To do this, we correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that their transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers cue in on just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases that underlie detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
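The analysis described amounts to predicting ratings from sign features; a minimal sketch follows, with invented data and hypothetical feature names (least squares fit plus a correlation of predicted and observed judgments).

```python
# Sketch: predict mean transitivity judgments from phonological features of
# signs via least squares; the data and feature names are invented.
import numpy as np

# Rows = signs; columns = hypothetical phonological features,
# e.g., [two-handed?, path movement?, handshape change?]
features = np.array([
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
    [0, 0, 1],
    [1, 1, 1],
], dtype=float)
judgments = np.array([0.8, 0.6, 0.4, 0.3, 0.9])  # mean transitivity ratings

X = np.column_stack([np.ones(len(features)), features])  # add intercept term
coef, *_ = np.linalg.lstsq(X, judgments, rcond=None)
predicted = X @ coef

# How well the visual characteristics account for the ratings.
r = np.corrcoef(predicted, judgments)[0, 1]
print("coefficients:", coef)
print("correlation of predicted vs. observed:", round(r, 3))
```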
- Abstract: Previous work has shown that infants as young as 8 months of age can use certain features of the environment, such as the shape or color of visual stimuli, as cues to organize simple inputs into hierarchical rule structures, a robust form of reinforcement learning that supports generalization of prior learning to new contexts. However, especially in cluttered naturalistic environments, there is an abundance of potential cues that could be used to structure learning into hierarchical rule structures. It is unclear how infants determine which features constitute a higher-order context for organizing inputs into hierarchical rule structures. Here, we examine whether 9-month-old infants are biased to use social stimuli, relative to non-social stimuli, as a higher-order context to organize learning of simple visuospatial inputs into hierarchical rule sets. Infants were presented with four face/color-target location pairings, which could be learned most simply as individual associations. Alternatively, infants could use the faces or the colorful backgrounds as a higher-order context to organize the inputs into simpler color-location or face-location rules, respectively. Infants were then given a generalization test designed to probe how they learned the initial pairings. The results indicated that infants appeared to use the faces as a higher-order context to organize simpler color-location rules, which then supported generalization of learning to new face contexts. These findings provide new evidence that infants are biased to organize reinforcement learning around social stimuli.
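The contrast between learning flat pairings and organizing them under a higher-order context can be sketched as two data structures; the example below is illustrative only, with invented stimuli and locations.

```python
# Sketch: flat pair associations vs. a hierarchical rule set in which faces
# serve as a higher-order context; all stimuli and locations are invented.

# Flat learning: memorize each face/color -> location pairing individually.
flat = {
    ("face1", "red"): "left",  ("face1", "blue"): "right",
    ("face2", "red"): "right", ("face2", "blue"): "left",
}

# Hierarchical learning: the face selects a lower-order color -> location rule.
hierarchical = {
    "face1": {"red": "left", "blue": "right"},
    "face2": {"red": "right", "blue": "left"},
}

# Generalization test: a new face can inherit an existing color rule wholesale,
# which the flat learner cannot do without new pairwise training.
hierarchical["face3"] = hierarchical["face1"]
print(hierarchical["face3"]["blue"])  # "right" -- transferred to a new context
```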