Unbounded productivity is a hallmark of linguistic competence. Here, we asked whether this capacity automatically applies to signs. Participants saw video clips of novel signs in American Sign Language (ASL) produced by a signer whose body appeared in a monochromatic color, and they quickly identified the signs' color. The critical manipulation compared reduplicative (αα) signs to non-reduplicative (αβ) controls. Past research has shown that reduplication is frequent in ASL, and frequent structures elicit stronger Stroop interference. If signers automatically generalize the reduplication function, then αα signs should elicit stronger color-naming interference. Results showed no effect of reduplication for signs whose base (α) consisted of native ASL features (possibly due to the similarity of α items to color names). Remarkably, signers were highly sensitive to reduplication when the base (α) included novel features. These results demonstrate that signers can freely extend their linguistic knowledge to novel forms, and that they do so automatically. Unbounded productivity thus defines all languages, irrespective of input modality.
Visual form of ASL verb signs predicts non-signer judgment of transitivity
Longstanding cross-linguistic work on event representations in spoken languages has argued for a robust mapping between an event's underlying representation and its syntactic encoding, such that, for example, the agent of an event is most frequently mapped to subject position. In the same vein, sign languages have long been claimed to construct signs that visually represent their meaning, i.e., signs that are iconic. Experimental research on linguistic parameters such as plurality and aspect has recently shown some of them to be visually universal in sign, i.e., recognized by non-signers as well as signers, and has identified specific visual cues that achieve this mapping. However, little is known about what makes action representations in sign language iconic, or whether and how the mapping of underlying event representations to syntactic encoding is visually apparent in the form of a verb sign. To this end, we asked what visual cues non-signers may use in evaluating transitivity (i.e., the number of entities involved in an action). To do this, we correlated non-signer judgments about the transitivity of verb signs from American Sign Language (ASL) with phonological characteristics of these signs. We found that non-signers did not accurately guess the transitivity of the signs, but that non-signer transitivity judgments can nevertheless be predicted from the signs' visual characteristics. Further, non-signers attend to just those features that code event representations across sign languages, despite interpreting them differently. This suggests the existence of visual biases underlying the detection of linguistic categories, such as transitivity, which may uncouple from underlying conceptual representations over time in mature sign languages due to lexicalization processes.
- Award ID(s): 1734938
- PAR ID: 10340781
- Editor(s): Perlman, Marcus
- Date Published:
- Journal Name: PLOS ONE
- Volume: 17
- Issue: 2
- ISSN: 1932-6203
- Page Range / eLocation ID: e0262098
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
In this paper, I explore metalinguistic discourse in Zinacantec Family Homesign ('Z sign'), an emergent sign language developed by three deaf siblings and their hearing family members. In particular, I examine how metalinguistic discourse unfolds between a hearing Z signer and various members of her family—including her deaf siblings, her elderly hearing father, and her young hearing son. I do so via a close examination of several snippets of conversation in which the Z signers talk about the "right" way to sign, paying close attention to how they mobilize various semiotic devices, including manual signs, eye gaze, facial expressions, and speech. I aim to understand not only the formal components of metalinguistic discourse in Z sign but also how it functions as a form of social action in this small linguistic community. How do members of this family position themselves and others as (in)competent, (non-)authoritative signers in light of existing social divisions among them? How do they reinforce or challenge those social divisions through metalinguistic discourse? How might metalinguistic discourse contribute to the propagation of emergent linguistic norms throughout the family? I find that a recurrent device for enacting metalinguistic critique among the Z signers is the partial re-production and transformation of others' utterances and other visible actions, manifested in a way that exploits the availability of multiple, semi-independent manual and non-manual articulators in the visual modality.
-
The research community generally accepts that signed and spoken languages contain both iconicity and arbitrariness. Iconicity's impact on statistical distributions of motivated forms throughout signed language lexicons is clear (e.g., Occhino, 2016). However, there has been little work to determine whether the iconic links between form and meaning are relevant only to a sign's initial formation, or whether these links are stored as part of lexical representations. In the present study, 40 Deaf signers of American Sign Language were asked to rate two-handed signs in their citation form and in one-handed (reduced) forms. Twelve signs were highly iconic. For each of these highly iconic signs, a less iconic but phonologically similar sign of the same grammatical category was also chosen. Signs were presented in carrier sentences and in isolation. Participants preferred one-handed forms of the highly iconic signs over one-handed forms of their phonologically similar but less iconic counterparts. Thus, iconicity impacted the application of a synchronic phonological process. This finding suggests that lexical representations retain iconic form-meaning links and that these links are accessible to the phonological grammar.
-
ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0), which contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency ("guessability") ratings (from non-signers), sign and video clip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/, and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/.
-
Since American Sign Language (ASL) has no standard written form, Deaf signers frequently share videos in order to communicate in their native language. However, since both hands and face convey critical linguistic information in signed languages, sign language videos cannot preserve signer privacy. While signers have expressed interest, for a variety of applications, in sign language video anonymization that would effectively preserve linguistic content, attempts to develop such technology have had limited success, given the complexity of hand movements and facial expressions. Existing approaches rely predominantly on precise pose estimations of the signer in video footage and often require sign language video datasets for training. These requirements prevent them from processing videos 'in the wild,' in part because of the limited diversity present in current sign language video datasets. To address these limitations, our research introduces DiffSLVA, a novel methodology that utilizes pre-trained large-scale diffusion models for zero-shot text-guided sign language video anonymization. We incorporate ControlNet, which leverages low-level image features such as HED (Holistically-Nested Edge Detection) edges, to circumvent the need for pose estimation. Additionally, we develop a specialized module dedicated to capturing facial expressions, which are critical for conveying essential linguistic information in signed languages. We then combine the above methods to achieve anonymization that better preserves the essential linguistic content of the original signer. This innovative methodology makes possible, for the first time, sign language video anonymization that could be used for real-world applications, which would offer significant benefits to the Deaf and Hard-of-Hearing communities. We demonstrate the effectiveness of our approach with a series of signer anonymization experiments.