The American Sign Language Linguistic Research Project (ASLLRP) provides Internet access to high-quality ASL video data, generally including front and side views and a close-up of the face. The manual and non-manual components of the signing have been linguistically annotated using SignStream®. The recently expanded video corpora can be browsed and searched through the Data Access Interface (DAI 2) we have designed; complex searches are supported. The data from our corpora can also be downloaded; annotations are available in an XML export format. We have also developed the ASLLRP Sign Bank, which contains almost 6,000 sign entries for lexical signs with distinct English-based glosses, comprising a total of 41,830 examples of lexical signs (in addition to about 300 gestures, over 1,000 fingerspelled signs, and 475 classifier examples). The Sign Bank is likewise accessible and searchable on the Internet; it can also be accessed from within SignStream® (software to facilitate linguistic annotation and analysis of visual language data) to make annotations more accurate and efficient. Here we describe the available resources. These data have been used for many types of research in linguistics and in computer-based sign language recognition from video; examples of such research are provided in the latter part of this article.
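For readers who want to work with the downloadable annotations programmatically, here is a minimal sketch of reading gloss annotations from an XML export. The element and attribute names used below (UTTERANCE, SIGN, GLOSS, START, END) are hypothetical placeholders for illustration only; consult the actual SignStream export schema for the real structure.

```python
# Minimal sketch: collecting gloss annotations from a hypothetical
# ASLLRP/SignStream XML export. Element and attribute names are assumed.
import xml.etree.ElementTree as ET

def load_glosses(path):
    """Return (gloss, start_ms, end_ms) triples from a hypothetical export."""
    tree = ET.parse(path)
    glosses = []
    for utterance in tree.getroot().iter("UTTERANCE"):
        for sign in utterance.iter("SIGN"):
            glosses.append((
                sign.get("GLOSS"),
                int(sign.get("START", 0)),
                int(sign.get("END", 0)),
            ))
    return glosses

if __name__ == "__main__":
    for gloss, start, end in load_glosses("export.xml"):
        print(f"{gloss}: {start}-{end} ms")
```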
Historical Linguistics of Sign Languages: Progress and Problems
In contrast to scholars and signers in the nineteenth century, William Stokoe conceived of American Sign Language (ASL) as a unique linguistic tradition with roots in nineteenth-century langue des signes française, a conception that is apparent in his earliest scholarship on ASL. Stokoe thus contributed to the theoretical foundations upon which the field of sign language historical linguistics would later develop. This review focuses on the development of sign language historical linguistics since Stokoe, including the field's significant progress and the theoretical and methodological problems that it still faces. The review examines the field's development through the lens of two related problems: how we understand sign language relationships, and how we understand cognacy as the term pertains to signs. It is suggested that the theoretical notions underlying these terms do not map straightforwardly onto the historical development of many sign languages. Recent approaches in sign language historical linguistics are highlighted, and future directions for research are suggested to address the problems discussed in this review.
- Award ID(s): 1941560
- PAR ID: 10349748
- Date Published:
- Journal Name: Frontiers in Psychology
- Volume: 13
- ISSN: 1664-1078
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We report on the development of ASLNet, a wordnet for American Sign Language (ASL). ASLNet V1.0 is currently under construction by mapping easy-to-translate ASL lexical nouns to Princeton WordNet synsets. We describe our data model and mapping approach, which can be extended to any sign language. Analysis of the 390 synsets processed to date indicates the success of our procedure, yet also highlights the need to supplement our mapping with the “merge” method. We outline our plans for upcoming work to remedy this, which include use of ASL free-association data. (A minimal sketch of this kind of synset mapping appears after this list.)
- To make it easier to add American Sign Language (ASL) to websites, which would increase information accessibility for many Deaf users, we investigate software to semi-automatically produce ASL animation from an easy-to-update script of the message, requiring us to automatically select the speed and timing for the animation. While we can model the speed and timing of human signers from video recordings, prior work has suggested that users prefer animations to be slower than videos of human signers. However, no prior study had systematically examined the multiple parameters of ASL timing, which include: sign duration, transition time, pausing frequency, pausing duration, and differential signing rate. In an experimental study, 16 native ASL signers provided subjective preference judgements during a side-by-side comparison of ASL animations in which each of these five parameters was varied. We empirically identified and report users' preferences for each of these individual timing parameters of ASL animation. (A sketch of these five parameters as a configuration object appears after this list.)
- To understand human language, whether spoken or signed, the listener or viewer has to parse the continuous external signal into components. The question of what those components are (e.g., phrases, words, sounds, phonemes?) has been a subject of long-standing debate. We reframe this question to ask: What properties of the incoming visual or auditory signal are indispensable to eliciting language comprehension? In this review, we assess the phenomenon of language parsing from a modality-independent viewpoint. We show that the interplay between dynamic changes in the entropy of the signal and neural entrainment to the signal at the syllable level (4–5 Hz range) is causally related to language comprehension in both speech and sign language. This modality-independent Entropy Syllable Parsing model for the linguistic signal offers insight into the mechanisms of language processing, suggesting common neurocomputational bases for syllables in speech and sign language. This article is categorized under: Linguistics > Linguistic Theory; Linguistics > Language in Mind and Brain; Linguistics > Computational Models of Language; Psychology > Language. (A toy illustration of windowed entropy and the 4–5 Hz band appears after this list.)
- Sign language is a complex visual language, and automatic interpretation of sign language can facilitate communication involving deaf individuals. As one of the essential components of sign language, fingerspelling connects natural spoken languages to sign language and expands the scale of the sign language vocabulary. In practice, it is challenging to analyze fingerspelling alphabets due to their signing speed and small motion range. The use of synthetic data has the potential to further improve fingerspelling alphabet analysis at scale. In this paper, we evaluate how different video-based human representations perform in a framework for Alphabet Generation for American Sign Language (ASL). We tested three mainstream video-based human representations: two-stream inflated 3D ConvNet, 3D landmarks of body joints, and rotation matrices of body joints. We also evaluated the effect of different skeleton graphs and selected body joints. The generation process for ASL fingerspelling used a transformer-based Conditional Variational Autoencoder. To train the model, we collected ASL alphabet signing videos from 17 signers with dynamic alphabet signing. The generated alphabets were evaluated using automatic quality metrics such as FID, and we also considered supervised metrics by recognizing the generated entries using Spatio-Temporal Graph Convolutional Networks. Our experiments show that using the rotation matrices of the upper-body joints and the signing hand gives the best results for the generation of ASL alphabet signing. Going forward, our goal is to produce articulated fingerspelled words by combining the individual alphabets learned in this work. (A sketch of the rotation-matrix joint representation appears after this list.)
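The synset-mapping approach from the ASLNet item above can be illustrated with Princeton WordNet as exposed by NLTK. This is a minimal sketch under stated assumptions: the gloss list is invented, and the real ASLNet data model involves curation well beyond listing candidate synsets.

```python
# Minimal sketch: look up Princeton WordNet noun synsets for an
# English-based ASL gloss. Requires: nltk.download("wordnet").
from nltk.corpus import wordnet as wn

def map_gloss_to_synsets(gloss):
    """Candidate noun synsets for an English-based ASL gloss."""
    return wn.synsets(gloss.lower(), pos=wn.NOUN)

# Hypothetical glosses for illustration only.
for gloss in ["book", "house", "teacher"]:
    candidates = map_gloss_to_synsets(gloss)
    if candidates:
        print(gloss, "->", candidates[0].name(), "|", candidates[0].definition())
```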
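The five timing parameters from the animation study can be captured as a simple configuration object. The field names and default values below are illustrative assumptions, not the authors' experimental settings.

```python
# Minimal sketch of the five ASL animation timing parameters the study
# varied. All names and defaults are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TimingParameters:
    sign_duration_scale: float = 1.0   # multiplier on per-sign duration
    transition_time_ms: float = 150.0  # inter-sign transition time
    pause_frequency: float = 0.1       # fraction of sign boundaries with a pause
    pause_duration_ms: float = 300.0   # length of each inserted pause
    differential_rate: float = 1.0     # relative rate variation across the utterance

# Example: a slower variant with longer pauses.
slow = TimingParameters(sign_duration_scale=1.3, pause_duration_ms=450.0)
print(slow)
```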
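The Entropy Syllable Parsing account pairs dynamic changes in signal entropy with syllable-rate (4–5 Hz) entrainment. The toy computation below illustrates both quantities on a synthetic signal; the window size, bin count, and the input itself are assumptions for illustration, not the review's analysis pipeline.

```python
# Toy illustration: short-time entropy of a signal plus its 4-5 Hz
# (syllable-rate) band. Synthetic input and parameters are assumed.
import numpy as np
from scipy.signal import butter, filtfilt

def windowed_entropy(x, win=400):
    """Shannon entropy of amplitude histograms over sliding windows."""
    out = []
    for i in range(0, len(x) - win, win):
        hist, _ = np.histogram(x[i:i + win], bins=32)
        p = hist[hist > 0] / hist.sum()
        out.append(-np.sum(p * np.log2(p)))
    return np.array(out)

fs = 1000.0                              # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)
signal = np.sin(2 * np.pi * 4.5 * t) + 0.3 * np.random.randn(t.size)

b, a = butter(4, [4.0, 5.0], btype="band", fs=fs)
syllable_band = filtfilt(b, a, signal)   # the 4-5 Hz "entrainment" band
print(windowed_entropy(signal)[:5], np.std(syllable_band))
```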
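Finally, the rotation-matrix joint representation that performed best in the fingerspelling-generation study can be sketched with SciPy's rotation utilities. The joint subset, Euler-angle convention, and flattening scheme are assumptions for illustration, not the paper's exact pipeline.

```python
# Minimal sketch: encode each selected joint in a frame as a 3x3
# rotation matrix and flatten into one feature vector. Joints and
# angles are hypothetical.
import numpy as np
from scipy.spatial.transform import Rotation as R

JOINTS = ["shoulder_r", "elbow_r", "wrist_r"]  # hypothetical joint subset

def frame_features(euler_angles_deg):
    """Flatten per-joint rotation matrices into one feature vector."""
    mats = [R.from_euler("xyz", a, degrees=True).as_matrix()
            for a in euler_angles_deg]               # one 3x3 matrix per joint
    return np.concatenate([m.ravel() for m in mats])  # shape: (9 * n_joints,)

# One frame of made-up joint angles (degrees), one triple per joint.
frame = [[10.0, 0.0, 5.0], [45.0, 10.0, 0.0], [0.0, 20.0, 15.0]]
print(frame_features(frame).shape)  # (27,)
```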