This content will become publicly available on April 2, 2026

Title: Idioms and other constructions in American Sign Language
Idioms are phrases like English [hit the sack], meaning ‘go to bed’. For linguists working with sign languages, a question arises: “What do idioms look like in a sign language?” This paper proposes a definition of idiom that can be used to identify idioms across languages: idioms are affective constructions, they are phrasal units, and they are conventional expressions for members of a language community. This definition is used to identify idioms in ASL such as [keep.quiet hard] ‘just have to accept it’. This approach to idioms motivates a constructionist approach to ASL grammar in general, in which all aspects of linguistic knowledge can be represented as meaning-form pairs that vary in their complexity and schematicity.
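To make the constructionist claim concrete, here is a minimal sketch (hypothetical field names and toy entries, not from the paper) of how a lexical sign, an idiom, and a partially schematic pattern could all be stored as meaning-form pairs that differ only in how much of the form is fixed.

```python
from dataclasses import dataclass

@dataclass
class Construction:
    """A meaning-form pair; slots marked None are schematic (open)."""
    form: list      # fixed signs and/or open slots
    meaning: str    # conventionalized meaning
    schematic: bool # True if the form contains open slots

# Fully lexical sign: one fixed form, one meaning.
sign = Construction(form=["KEEP.QUIET"], meaning="keep quiet", schematic=False)

# Idiom: a fixed multi-sign phrase with a conventional, affective meaning.
idiom = Construction(form=["KEEP.QUIET", "HARD"],
                     meaning="just have to accept it", schematic=False)

# Partially schematic pattern: an open slot (None) plus fixed material.
pattern = Construction(form=[None, "HARD"],
                       meaning="<X> is hard to bear", schematic=True)

for c in (sign, idiom, pattern):
    print(c.form, "->", c.meaning)
```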
Award ID(s): 2141363
PAR ID: 10637285
Author(s) / Creator(s):
Publisher / Repository: De Gruyter Mouton
Date Published:
Journal Name: Cognitive Linguistics
Volume: 36
Issue: 2
ISSN: 0936-5907
Page Range / eLocation ID: 183–225
Format(s): Medium: X
Sponsoring Org: National Science Foundation
More Like this
  1. The view that words are arbitrary is a foundational assumption about language, used to set human languages apart from nonhuman communication. We present here a study of the alignment between the semantic and phonological structure (systematicity) of American Sign Language (ASL), and for comparison, two spoken languages: English and Spanish. Across all three languages, words that are semantically related are more likely to be phonologically related, highlighting systematic alignment between word form and word meaning. Critically, there is a significant effect of iconicity (a perceived physical resemblance between word form and word meaning) on this alignment: words are most likely to be phonologically related when they are semantically related and iconic. This phenomenon is particularly widespread in ASL: half of the signs in the ASL lexicon are iconically related to other signs, i.e., there is a nonarbitrary relationship between form and meaning that is shared across signs. Taken together, the results reveal that iconicity can act as a driving force behind the alignment between the semantic and phonological structure of spoken and signed languages, but languages may differ in the extent to which iconicity structures the lexicon. Theories of language must account for iconicity as a possible organizing principle of the lexicon.
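As a rough illustration of what measuring this alignment involves, a toy sketch follows (invented word pairs and relatedness judgments, not the study's data or procedure): for each word pair we record whether it is semantically related, phonologically related, and iconic, then compare the rate of phonological relatedness across those groups.

```python
# Toy sketch of semantic-phonological alignment (systematicity).
# Each tuple is an invented word pair:
# (semantically_related, phonologically_related, iconic)
pairs = [
    (True,  True,  True),
    (True,  False, False),
    (False, False, False),
    (True,  True,  True),
    (False, True,  False),
    (False, False, True),
]

def phon_rate(rows):
    """Share of word pairs that are phonologically related."""
    return sum(p for _, p, _ in rows) / len(rows) if rows else 0.0

sem = [r for r in pairs if r[0]]
nonsem = [r for r in pairs if not r[0]]
sem_iconic = [r for r in sem if r[2]]

print("phonologically related | semantically related:", phon_rate(sem))
print("phonologically related | semantically unrelated:", phon_rate(nonsem))
print("phonologically related | related and iconic:", phon_rate(sem_iconic))
```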
  2. Abstract: In many descriptions of American Sign Language (ASL), signs like [breakfast] are identified as compounds. These signs were once formed with two separate signs but have since fused into a single unit. This article presents an alternative definition of compound that includes both functional and formal properties. Following this updated definition, examples of ASL compounds are constructions like [name sign] and [sign language], which combine two object-concept words to name an object concept, as well as related constructions like [formal room] 'living room,' which also label object concepts. The updated definition of compound allows for terminological consistency and sets the stage for fuller understanding of the variety of multisign units in ASL.
  3. Despite some prior research and commercial systems, if someone sees an unfamiliar American Sign Language (ASL) word and wishes to look up its meaning in a dictionary, this remains a difficult task. There is no standard label a user can type to search for a sign, and formulating a query based on linguistic properties is challenging for students learning ASL. Advances in sign-language recognition technology will soon enable the design of a search system for ASL word look-up in dictionaries, by allowing users to generate a query by submitting a video of themselves performing the word they believe they encountered somewhere. Users would then view a results list of video clips or animations to find the desired word. In this research, we are investigating the usability of such a proposed system, a webcam-based ASL dictionary system, using a Wizard-of-Oz prototype, and we have enhanced the design so that it can support sign-language word look-up even when the performance of the underlying sign-recognition technology is low. We have also investigated the requirements of students learning ASL in regard to how results should be displayed and how a system could enable them to filter the results of the initial query, to aid in their search for a desired word. We compared users’ satisfaction when using a system with or without post-query filtering capabilities. We discuss our upcoming study to investigate users’ experience with a working prototype based on actual sign-recognition technology that is being designed. Finally, we discuss extensions of this work to the context of users searching datasets of videos of other human movements, e.g., dance moves, or searching for words in other languages.
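A rough sketch of the look-up flow this abstract describes is given below. The class, function names, and filter fields (handshape, location) are hypothetical stand-ins, not the actual system: a recognizer scores dictionary entries for the webcam query, and post-query filters let a learner narrow the ranked results.

```python
from dataclasses import dataclass

@dataclass
class DictionaryEntry:
    gloss: str
    handshape: str
    location: str
    video_url: str

def recognize(query_video, dictionary):
    """Stand-in for the sign-recognition step: score every dictionary entry
    for the query video. Here every entry gets the same placeholder score;
    in a Wizard-of-Oz study a human wizard, not a model, supplies the ranking."""
    return [(entry, 0.0) for entry in dictionary]

def lookup(query_video, dictionary, handshape=None, location=None, top_k=20):
    """Rank entries for a webcam query, then apply optional post-query filters
    (e.g. handshape or location) so a learner can narrow a long, noisy list."""
    ranked = sorted(recognize(query_video, dictionary),
                    key=lambda pair: pair[1], reverse=True)
    results = [entry for entry, _ in ranked]
    if handshape is not None:
        results = [e for e in results if e.handshape == handshape]
    if location is not None:
        results = [e for e in results if e.location == location]
    return results[:top_k]

# Toy usage: two invented entries, filtered to flat handshapes only.
entries = [DictionaryEntry("BOOK", "flat", "neutral", "book.mp4"),
           DictionaryEntry("MOTHER", "open-5", "chin", "mother.mp4")]
print([e.gloss for e in lookup(None, entries, handshape="flat")])
```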
  4. Many sign languages are bona fide natural languages with grammatical rules and lexicons, and hence can benefit from machine translation methods. Similarly, since sign language is a visual-spatial language, it can also benefit from computer vision methods for encoding it. With the advent of deep learning methods in recent years, significant advances have been made in natural language processing (specifically neural machine translation) and in computer vision methods (specifically image and video captioning). Researchers have therefore begun expanding these learning methods to sign language understanding. Sign language interpretation is especially challenging because it involves a continuous visual-spatial modality in which meaning is often derived from context. The focus of this article, therefore, is to examine various deep learning-based methods for encoding sign language as input, and to analyze the efficacy of several machine translation methods over three different sign language datasets. The goal is to determine which combinations are sufficiently robust for sign language translation without any gloss-based information. To understand the role of the different input features, we perform ablation studies over the model architectures (input features + neural translation models) for improved continuous sign language translation. These input features include body and finger joints and facial points, as well as vector representations/embeddings from convolutional neural networks. The machine translation models explored include several baseline sequence-to-sequence approaches, more complex networks using attention and reinforcement learning, and the transformer model. We implement the translation methods over multiple sign languages: German (GSL), American (ASL), and Chinese (CSL). From our analysis, the transformer model combined with input embeddings from ResNet50 or pose-based landmark features outperformed all the other sequence-to-sequence models, achieving higher BLEU-2 to BLEU-4 scores on the controlled and constrained GSL benchmark dataset. These combinations also showed significant promise on the other, less controlled ASL and CSL datasets.
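As a generic illustration of the model family being compared (not the authors' implementation; the feature dimension, vocabulary size, and layer counts below are invented), a gloss-free sign-to-text transformer can be sketched as follows: per-frame pose landmarks or CNN frame embeddings are projected into the model dimension and decoded into spoken-language tokens.

```python
import torch
import torch.nn as nn

class SignTranslationTransformer(nn.Module):
    """Gloss-free sign-to-text translation: pose/embedding frames in, words out."""
    def __init__(self, feat_dim=150, vocab_size=3000, d_model=256,
                 nhead=4, num_layers=3):
        super().__init__()
        self.input_proj = nn.Linear(feat_dim, d_model)   # project frame features
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(d_model=d_model, nhead=nhead,
                                           num_encoder_layers=num_layers,
                                           num_decoder_layers=num_layers,
                                           batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)        # predict next word

    def forward(self, frames, target_tokens):
        # frames: (batch, num_frames, feat_dim) pose landmarks or CNN embeddings
        # target_tokens: (batch, target_len) word indices of the spoken sentence
        src = self.input_proj(frames)
        tgt = self.tok_embed(target_tokens)
        mask = self.transformer.generate_square_subsequent_mask(target_tokens.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)                           # (batch, target_len, vocab)

# Example: a batch of 2 videos, 100 frames of 150-dim pose features each.
model = SignTranslationTransformer()
frames = torch.randn(2, 100, 150)
tokens = torch.randint(0, 3000, (2, 12))
print(model(frames, tokens).shape)  # torch.Size([2, 12, 3000])
```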
  5. Sign languages are used as a primary language by approximately 70 million D/deaf people world-wide. However, most communication technologies operate in spoken and written languages, creating inequities in access. To help tackle this problem, we release ASL Citizen, the first crowdsourced Isolated Sign Language Recognition (ISLR) dataset, collected with consent and containing 83,399 videos for 2,731 distinct signs filmed by 52 signers in a variety of environments. We propose that this dataset be used for sign language dictionary retrieval for American Sign Language (ASL), where a user demonstrates a sign to their webcam to retrieve matching signs from a dictionary. Through our generalizable baselines, we show that training supervised machine learning classifiers with our dataset achieves competitive performance on metrics relevant for dictionary retrieval, with 63% accuracy and a recall-at-10 of 91%, evaluated entirely on videos of users who are not present in the training or validation sets. 
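The retrieval metrics quoted in this last abstract are simple to compute once a classifier assigns a score to every dictionary sign for each query video; below is a small sketch with invented scores (not ASL Citizen data) showing top-1 accuracy and recall-at-10.

```python
import numpy as np

def retrieval_metrics(scores, true_labels, k=10):
    """scores: (num_queries, num_signs) classifier scores for each query video.
    true_labels: (num_queries,) index of the correct sign for each query.
    Returns top-1 accuracy and recall-at-k (is the correct sign in the top k?)."""
    order = np.argsort(-scores, axis=1)          # signs ranked best-first per query
    top1 = (order[:, 0] == true_labels).mean()
    topk = np.any(order[:, :k] == true_labels[:, None], axis=1).mean()
    return top1, topk

# Toy example: 4 query videos scored against a 6-sign vocabulary.
rng = np.random.default_rng(0)
scores = rng.random((4, 6))
labels = np.array([2, 0, 5, 1])
acc, recall_at_10 = retrieval_metrics(scores, labels, k=10)
print(f"top-1 accuracy: {acc:.2f}, recall-at-10: {recall_at_10:.2f}")
```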