
Title: Simultaneous structures in sign languages: Acquisition and emergence
The visual-gestural modality affords its users simultaneous movement of several independent articulators and thus lends itself to simultaneous encoding of information. Much research has focused on the fact that sign languages coordinate two manual articulators in addition to a range of non-manual articulators to present different types of linguistic information simultaneously, from phonological contrasts to inflection, spatial relations, and information structure. Children and adults acquiring a signed language thus arguably need to comprehend and produce simultaneous structures to a greater extent than individuals acquiring a spoken language. In this paper, we discuss the simultaneous encoding that is found in emerging and established sign languages; we also discuss places where sign languages are unexpectedly sequential. We explore potential constraints on simultaneity in cognition and motor coordination that might impact the acquisition and use of simultaneous structures.
Award ID(s):
2141436
PAR ID:
10388710
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Psychology
Volume:
13
ISSN:
1664-1078
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Many sign languages are bona fide natural languages with grammatical rules and lexicons, and hence can benefit from machine translation methods. Similarly, since sign language is a visual-spatial language, it can also benefit from computer vision methods for encoding it. With the advent of deep learning methods in recent years, significant advances have been made in natural language processing (specifically neural machine translation) and in computer vision methods (specifically image and video captioning). Researchers have therefore begun expanding these learning methods to sign language understanding. Sign language interpretation is especially challenging, because it involves a continuous visual-spatial modality where meaning is often derived based on context. The focus of this article, therefore, is to examine various deep learning–based methods for encoding sign language as inputs, and to analyze the efficacy of several machine translation methods over three different sign language datasets. The goal is to determine which combinations are sufficiently robust for sign language translation without any gloss-based information. To understand the role of the different input features, we perform ablation studies over the model architectures (input features + neural translation models) for improved continuous sign language translation. These input features include body and finger joints, facial points, as well as vector representations/embeddings from convolutional neural networks. The machine translation models explored include several baseline sequence-to-sequence approaches, more complex networks using attention, reinforcement learning, and the transformer model. We implement the translation methods over multiple sign languages: German Sign Language (GSL), American Sign Language (ASL), and Chinese Sign Language (CSL). From our analysis, the transformer model combined with input embeddings from ResNet50 or pose-based landmark features outperformed all the other sequence-to-sequence models by achieving higher BLEU2-BLEU4 scores when applied to the controlled and constrained GSL benchmark dataset. These combinations also showed significant promise on the other, less controlled ASL and CSL datasets. (An illustrative sketch of such a transformer-based pipeline appears after this list.)
  2. Sign languages are human communication systems that are equivalent to spoken language in their capacity for information transfer, but which use a dynamic visual signal for communication. Thus, linguistic metrics of complexity, which are typically developed for linear, symbolic linguistic representation (such as written forms of spoken languages), do not translate easily into sign language analysis. A comparison of physical signal metrics, on the other hand, is complicated by the higher dimensionality (spatial and temporal) of the sign language signal as compared to a speech signal (solely temporal). Here, we review a variety of approaches to operationalizing sign language complexity based on linguistic and physical data, and identify the approaches that allow for high-fidelity modeling of the data in the visual domain while capturing linguistically relevant features of the sign language signal.
  3. Acquisition of natural language has been shown to fundamentally impact both one’s ability to use the first language and the ability to learn subsequent languages later in life. Sign languages offer a unique perspective on this issue because Deaf signers receive access to signed input at varying ages. The majority acquires sign language in (early) childhood, but some learn sign language later, a situation that is drastically different from that of spoken language acquisition. To investigate the effect of age of sign language acquisition and its potential interplay with age in signers, we examined grammatical acceptability ratings and reaction time measures in a group of Deaf signers (age range = 28–58 years) with early (0–3 years) or later (4–7 years) acquisition of sign language in childhood. Behavioral responses to grammatical word order variations (subject–object–verb [SOV] vs. object–subject–verb [OSV]) were examined in sentences that included (1) simple sentences, (2) topicalized sentences, and (3) sentences involving manual classifier constructions, uniquely characteristic of sign languages. Overall, older participants responded more slowly. Age of acquisition had subtle effects on acceptability ratings, whereby the direction of the effect depended on the specific linguistic structure.
  4. Code-blending is the simultaneous expression of utterances using both a sign language and a spoken language. We expect that, like code-switching, code-blending is linguistically constrained, and we thus investigate two hypothesized constraints using an acceptability judgment task. Participants rated the acceptability of code-blended utterances designed to be consistent or inconsistent with these hypothesized constraints. We find strong support for the proposed constraint that each modality of code-blended utterances contributes content to a single proposition. We also find support for the proposed constraint that, at least for American Sign Language (ASL) and English, code-blended utterances make use of a single derivation which is realized using surface forms in the two languages, rather than two simultaneous derivations, one for each language. While this study was limited to ASL/English code-blending and further investigation is needed, we hope that this novel study will encourage future research comparing linguistic constraints on code-blending and code-switching.
  5. In this work, we address structural, iconic and social dimensions of the emergence of phonological systems in two emerging sign languages. A comparative analysis is conducted of data from a village sign language (Central Taurus Sign Language; CTSL) and a community sign language (Nicaraguan Sign Language; NSL). Both languages are approximately 50 years old, but the sizes and social structures of their respective communities are quite different. We find important differences between the two languages’ handshape inventories. CTSL’s handshape inventory has changed more slowly than NSL’s across the same time period. In addition, while the inventories of the two languages are of similar size, handshape complexity is higher in NSL than in CTSL. This work provides an example of the unique and important perspective that emerging sign languages offer regarding longstanding questions about how phonological systems emerge. 
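For readers curious how the gloss-free pipeline described in item 1 might be wired together, the sketch below shows a minimal transformer encoder-decoder that consumes per-frame ResNet50 embeddings and decodes spoken-language tokens. This is an illustrative sketch in PyTorch, not the authors' implementation: the layer sizes, vocabulary size, and all names (SignTranslationTransformer, frame_proj, and so on) are assumptions for demonstration and are not taken from the paper or its datasets.

```python
# Minimal sketch (not the authors' code) of gloss-free sign language translation:
# per-frame ResNet50 embeddings are fed to a standard Transformer encoder-decoder
# that autoregressively predicts spoken-language tokens.
import torch
import torch.nn as nn


class SignTranslationTransformer(nn.Module):
    def __init__(self, frame_feat_dim=2048, vocab_size=8000,
                 d_model=512, nhead=8, num_layers=3, max_len=512):
        super().__init__()
        # Project 2048-d ResNet50 frame embeddings into the model dimension.
        self.frame_proj = nn.Linear(frame_feat_dim, d_model)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def add_pos(self, x):
        # Learned positional embeddings, added to frame or token embeddings.
        positions = torch.arange(x.size(1), device=x.device)
        return x + self.pos_emb(positions)

    def forward(self, frame_feats, tgt_tokens):
        # frame_feats: (batch, num_frames, 2048) -- one ResNet50 vector per frame
        # tgt_tokens:  (batch, tgt_len) -- shifted target sentence token ids
        src = self.add_pos(self.frame_proj(frame_feats))
        tgt = self.add_pos(self.token_emb(tgt_tokens))
        # Causal mask so each output position attends only to earlier tokens.
        tgt_mask = self.transformer.generate_square_subsequent_mask(
            tgt_tokens.size(1)).to(frame_feats.device)
        hidden = self.transformer(src, tgt, tgt_mask=tgt_mask)
        return self.out(hidden)  # (batch, tgt_len, vocab_size) logits


# Toy forward pass: a 100-frame clip and a 12-token target sentence.
model = SignTranslationTransformer()
frames = torch.randn(2, 100, 2048)          # stand-in for ResNet50 features
targets = torch.randint(0, 8000, (2, 12))   # stand-in for tokenized sentences
logits = model(frames, targets)
print(logits.shape)  # torch.Size([2, 12, 8000])
```

Under these assumptions, swapping ResNet50 embeddings for pose-based landmark vectors (body and finger joints, facial points) only changes frame_feat_dim; the rest of the model stays the same, which is roughly how ablation studies over input features, as described in item 1, could be organized.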