Idioms are phrases like English [hit the sack], meaning ‘go to bed’. For linguists working with sign languages, a question arises: “What do idioms look like in a sign language?” This paper proposes a definition of idiom that can be used to identify idioms across languages. Idioms are affective constructions, phrasal units, and conventional expressions for members of a language community. This definition is used to identify idioms in ASL such as [keep.quiet hard] ‘just have to accept it’. This approach to idioms motivates a constructionist approach to ASL grammar in general, in which all aspects of linguistic knowledge can be represented as meaning-form pairs that vary in their complexity and schematicity.
Identifying ASL Compounds: A Functionalist Approach
Abstract: In many descriptions of American Sign Language (ASL), signs like [breakfast] are identified as compounds. These signs were once formed with two separate signs but have since fused into a single unit. This article presents an alternative definition of compound that includes both functional and formal properties. Following this updated definition, examples of ASL compounds are constructions like [name sign] and [sign language], which combine two object-concept words to name an object concept, as well as related constructions like [formal room] 'living room,' which also label object concepts. The updated definition of compound allows for terminological consistency and sets the stage for fuller understanding of the variety of multisign units in ASL.
- Award ID(s): 2141363
- PAR ID: 10458384
- Date Published:
- Journal Name: Sign Language Studies
- Volume: 23
- Issue: 4
- ISSN: 1533-6263
- Page Range / eLocation ID: 461 to 499
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
-
Picture-naming tasks provide critical data for theories of lexical representation and retrieval and have been performed successfully in sign languages. However, the specific influences of lexical or phonological factors and stimulus properties on sign retrieval are poorly understood. To examine lexical retrieval in American Sign Language (ASL), we conducted a timed picture-naming study using 524 pictures (272 objects and 251 actions). We also compared ASL naming with previous data for spoken English for a subset of 425 pictures. Deaf ASL signers named object pictures faster and more consistently than action pictures, as previously reported for English speakers. Lexical frequency, iconicity, better name agreement, and lower phonological complexity each facilitated naming reaction times (RTs). RTs were also faster for pictures named with shorter signs (measured by average response duration). Target name agreement was higher for pictures with more iconic and shorter ASL names. The visual complexity of pictures slowed RTs and decreased target name agreement. RTs and target name agreement were correlated for ASL and English, but agreement was lower for ASL, possibly due to the English bias of the pictures. RTs were faster for ASL, which we attributed to a smaller lexicon. Overall, the results suggest that models of lexical retrieval developed for spoken languages can be adopted for signed languages, with the exception that iconicity should be included as a factor. The open-source picture-naming data set for ASL serves as an important, first-of-its-kind resource for researchers, educators, or clinicians for a variety of research, instructional, or assessment purposes. (A hedged analysis sketch along these lines appears as the first code example after this list.)
-
ASL-LEX is a publicly available, large-scale lexical database for American Sign Language (ASL). We report on the expanded database (ASL-LEX 2.0) that contains 2,723 ASL signs. For each sign, ASL-LEX now includes a more detailed phonological description, phonological density and complexity measures, frequency ratings (from deaf signers), iconicity ratings (from hearing non-signers and deaf signers), transparency (“guessability”) ratings (from non-signers), sign and videoclip durations, lexical class, and more. We document the steps used to create ASL-LEX 2.0, describe the distributional characteristics of sign properties across the lexicon, and examine the relationships among lexical and phonological properties of signs. Correlation analyses revealed that frequent signs were less iconic and phonologically simpler than infrequent signs, and that iconic signs tended to be phonologically simpler than less iconic signs. The complete ASL-LEX dataset and supplementary materials are available at https://osf.io/zpha4/ and an interactive visualization of the entire lexicon can be accessed on the ASL-LEX page: http://asl-lex.org/. (A hedged correlation sketch appears as the second code example after this list.)
-
Sign words are the building blocks of any sign language. In this work, we present wSignGen, a word-conditioned 3D American Sign Language (ASL) generation model dedicated to synthesizing realistic and grammatically accurate motion sequences for sign words. Our approach leverages a transformer-based diffusion model, trained on a curated dataset of 3D motion meshes from word-level ASL videos. By integrating CLIP, wSignGen offers two advantages: image-based generation, which is particularly useful for children who are learning sign language but not yet able to read, and the ability to generalize to unseen synonyms. Experiments demonstrate that wSignGen significantly outperforms the baseline model on the task of sign word generation. Moreover, human evaluation shows that wSignGen can generate high-quality, grammatically correct ASL signs that are effectively conveyed through 3D avatars. (A hedged architectural sketch appears as the third code example after this list.)
-
Like speech, signs are composed of discrete, recombinable features called phonemes. Prior work shows that models which can recognize phonemes are better at sign recognition, motivating deeper exploration of strategies for modeling sign language phonemes. In this work, we train graph convolution networks to recognize the sixteen phoneme “types” found in ASL-LEX 2.0. Specifically, we explore how learning strategies like multi-task and curriculum learning can leverage mutually useful information between phoneme types to facilitate the modeling of sign language phonemes. Results on the Sem-Lex Benchmark show that curriculum learning yields an average accuracy of 87% across all phoneme types, outperforming fine-tuning and multi-task strategies for most phoneme types. (A hedged modeling sketch appears as the fourth code example after this list.)
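The picture-naming study in the first related item reports that lexical frequency, iconicity, name agreement, phonological complexity, sign length, and picture visual complexity all influenced naming RTs. The sketch below shows one generic way such an item-level analysis could be run; the file name and column names are hypothetical placeholders, and the published analyses may use different (e.g., mixed-effects) models.

```python
# Minimal sketch: modeling picture-naming RTs from lexical and stimulus
# properties, in the spirit of the ASL picture-naming study summarized above.
# The file name and column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data: one row per picture.
df = pd.read_csv("asl_picture_naming_items.csv")

# Ordinary least squares over item means; the published analyses may well
# use a different model specification.
model = smf.ols(
    "rt_ms ~ frequency + iconicity + name_agreement"
    " + phonological_complexity + visual_complexity",
    data=df,
).fit()
print(model.summary())
```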
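The ASL-LEX 2.0 abstract (second related item) summarizes correlations among frequency, iconicity, and phonological complexity. A minimal sketch of that kind of correlation analysis follows; the CSV export and column names are assumptions, so consult the ASL-LEX materials at https://osf.io/zpha4/ for the actual field names.

```python
# Minimal sketch: rank correlations among lexical properties of signs,
# as reported for ASL-LEX 2.0. Column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

signs = pd.read_csv("asl_lex_2.csv")  # hypothetical export of the database

for x, y in [("frequency", "iconicity"),
             ("iconicity", "phonological_complexity"),
             ("frequency", "phonological_complexity")]:
    rho, p = spearmanr(signs[x], signs[y], nan_policy="omit")
    print(f"{x} ~ {y}: rho = {rho:.2f}, p = {p:.3g}")
```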
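The wSignGen abstract (third related item) describes a word-conditioned, transformer-based diffusion model over 3D motion, with CLIP providing the conditioning signal. The sketch below shows one generic way such a denoiser and a single DDPM-style training step could look; the dimensions, noise schedule, and conditioning interface are illustrative assumptions, not the actual wSignGen implementation.

```python
# Minimal sketch of a word-conditioned diffusion denoiser for 3D sign motion,
# loosely in the spirit of wSignGen as described above. All sizes and the
# noise schedule are illustrative assumptions.
import torch
import torch.nn as nn

class MotionDenoiser(nn.Module):
    def __init__(self, motion_dim=135, cond_dim=512, model_dim=256, layers=4):
        super().__init__()
        self.in_proj = nn.Linear(motion_dim, model_dim)
        self.cond_proj = nn.Linear(cond_dim, model_dim)   # CLIP word/image embedding
        self.time_emb = nn.Embedding(1000, model_dim)     # diffusion timestep
        enc_layer = nn.TransformerEncoderLayer(
            d_model=model_dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=layers)
        self.out_proj = nn.Linear(model_dim, motion_dim)

    def forward(self, noisy_motion, t, cond):
        # noisy_motion: (batch, frames, motion_dim); cond: (batch, cond_dim)
        h = self.in_proj(noisy_motion)
        h = h + self.time_emb(t)[:, None, :] + self.cond_proj(cond)[:, None, :]
        return self.out_proj(self.encoder(h))  # predicted noise

# One illustrative DDPM-style training step: add noise at a random timestep
# and train the denoiser to predict it, conditioned on a CLIP embedding.
betas = torch.linspace(1e-4, 0.02, 1000)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

model = MotionDenoiser()
x0 = torch.randn(8, 60, 135)     # stand-in for 60-frame motion sequences
cond = torch.randn(8, 512)       # stand-in for CLIP embeddings of the sign word
t = torch.randint(0, 1000, (8,))
noise = torch.randn_like(x0)
a = alpha_bar[t].view(-1, 1, 1)
xt = a.sqrt() * x0 + (1 - a).sqrt() * noise
loss = nn.functional.mse_loss(model(xt, t, cond), noise)
loss.backward()
```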
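The final related item trains graph convolution networks with multi-task and curriculum learning to recognize the sixteen phoneme types in ASL-LEX 2.0. The sketch below illustrates the general multi-task setup with a plain GCN encoder and one classification head per phoneme type; the skeleton graph, feature sizes, number of heads, and class counts are illustrative assumptions, not the Sem-Lex specifics.

```python
# Minimal sketch of a graph-convolutional encoder with one classification head
# per phoneme type (multi-task learning). Graph structure, feature sizes, and
# class counts are illustrative assumptions only.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj_norm):
        # x: (batch, nodes, in_dim); adj_norm: normalized adjacency (nodes, nodes)
        return torch.relu(self.lin(adj_norm @ x))

class PhonemeTypeClassifier(nn.Module):
    def __init__(self, in_dim=3, hidden=64, classes_per_type=None):
        super().__init__()
        # Only two hypothetical phoneme types shown; the actual task has sixteen.
        classes_per_type = classes_per_type or {"handshape": 49, "location": 32}
        self.gcn1 = GCNLayer(in_dim, hidden)
        self.gcn2 = GCNLayer(hidden, hidden)
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, n) for name, n in classes_per_type.items()})

    def forward(self, x, adj_norm):
        h = self.gcn2(self.gcn1(x, adj_norm), adj_norm).mean(dim=1)  # pool nodes
        return {name: head(h) for name, head in self.heads.items()}

# Toy usage: 27 pose keypoints with 3D coordinates, chain-shaped adjacency
# with self-loops, symmetrically normalized.
nodes = 27
adj = torch.eye(nodes)
for i in range(nodes - 1):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
deg_inv_sqrt = adj.sum(1).rsqrt().diag()
adj_norm = deg_inv_sqrt @ adj @ deg_inv_sqrt

model = PhonemeTypeClassifier()
logits = model(torch.randn(4, nodes, 3), adj_norm)
print({name: out.shape for name, out in logits.items()})
```

A curriculum variant, as described in the abstract, would order or weight the per-type losses over training rather than optimizing all heads uniformly from the start.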

