PoseASL: An RGBD Dataset of American Sign Language
Abstract
The PoseASL dataset consists of color and depth videos collected from ASL signers at the Linguistic and Assistive Technologies Laboratory under the direction of Matt Huenerfauth, as part of …
- Publisher: Databrary
- Publication Year:
- NSF-PAR ID: 10322980
- Award ID(s): 1749376
- Sponsoring Org: National Science Foundation
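The record above carries no usage documentation, but a minimal sketch of how paired color and depth streams like those the abstract describes are often read is given below. The file names, and the assumption that both streams are stored as aligned per-frame videos, are hypothetical rather than actual PoseASL conventions.

```python
# Hypothetical sketch: iterate over aligned color/depth frame pairs.
# File names are illustrative, not actual PoseASL paths.
import cv2

color = cv2.VideoCapture("session01_color.mp4")   # RGB stream (assumed name)
depth = cv2.VideoCapture("session01_depth.avi")   # depth stream (assumed name)

while True:
    ok_c, rgb_frame = color.read()
    ok_d, depth_frame = depth.read()
    if not (ok_c and ok_d):        # stop at the end of either stream
        break
    # rgb_frame: HxWx3 uint8; depth_frame encoding depends on how the
    # depth video was exported (often 8- or 16-bit per pixel).
    h, w = rgb_frame.shape[:2]

color.release()
depth.release()
```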
More Like this
-
Unbounded productivity is a hallmark of linguistic competence. Here, we asked whether this capacity automatically applies to signs. Participants saw video clips of novel signs in American Sign Language (ASL) produced by a signer whose body appeared in a monochromatic color, and they quickly identified the signs’ color. The critical manipulation compared reduplicative (αα) signs to non-reduplicative (αβ) controls. Past research has shown that reduplication is frequent in ASL, and frequent structures elicit stronger Stroop interference. If signers automatically generalize the reduplication function, then αα signs should elicit stronger color-naming interference. Results showed no effect of reduplication for signs whose base …
We present a new approach for isolated sign recognition, which combines a spatial-temporal Graph Convolution Network (GCN) architecture for modeling human skeleton keypoints with late fusion of both the forward and backward video streams, and we explore the use of curriculum learning. We employ a type of curriculum learning that dynamically estimates, during training, the order of difficulty of each input video for sign recognition; this involves learning a new family of data parameters that are dynamically updated during training. The research makes use of a large combined video dataset for American Sign Language (ASL), including data from both the …
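For readers unfamiliar with the "data parameters" style of curriculum learning this abstract mentions, the sketch below shows the core idea in PyTorch: each training video gets its own learnable temperature that rescales its logits, so the effective weight of hard examples adapts during training. The linear stand-in for the GCN, the shapes, and the optimizer settings are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumed shapes/model) of per-sample "data parameters".
import torch
import torch.nn.functional as F

num_samples, num_classes, feat_dim = 1000, 100, 256

# Stand-in for the spatial-temporal GCN: any module producing class logits.
model = torch.nn.Linear(feat_dim, num_classes)

# One learnable (log-)temperature per training video ("data parameters").
log_data_params = torch.zeros(num_samples, requires_grad=True)

opt_model = torch.optim.SGD(model.parameters(), lr=0.1)
opt_data = torch.optim.SGD([log_data_params], lr=0.1)

def train_step(features, labels, sample_ids):
    logits = model(features)
    # exp keeps temperatures positive; all start at 1.0.
    temps = torch.exp(log_data_params[sample_ids]).unsqueeze(1)
    # Hard videos can raise their temperature, softening their loss,
    # which induces an easy-to-hard ordering as training proceeds.
    loss = F.cross_entropy(logits / temps, labels)
    opt_model.zero_grad()
    opt_data.zero_grad()
    loss.backward()
    opt_model.step()
    opt_data.step()
    return loss.item()

# Toy batch of precomputed per-video features.
feats = torch.randn(8, feat_dim)
labels = torch.randint(0, num_classes, (8,))
print(train_step(feats, labels, torch.arange(8)))
```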
-
While a significant amount of work has been done on the commonly used, tightly constrained, weather-based German Sign Language (GSL) dataset, little has been done for continuous sign language translation (SLT) in more realistic settings, including American Sign Language (ASL) translation. Also, while CNN-based features have been consistently shown to work well on the GSL dataset, it is not clear whether such features will work as well in more realistic settings when there are more heterogeneous signers in non-uniform backgrounds. To this end, in this work, we introduce a new, realistic phrase-level ASL dataset (ASLing), and explore the …
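As a rough illustration of the "CNN-based features" this abstract refers to, here is a minimal sketch that pools a pretrained ResNet over each video frame to produce the per-frame feature sequence a translation model would typically consume. The backbone choice, input shapes, and preprocessing are assumptions, not details from the paper.

```python
# Minimal sketch (assumed preprocessing) of frame-level CNN features for SLT.
import torch
import torchvision.models as models
import torchvision.transforms as T

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # keep the 512-d pooled features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ConvertImageDtype(torch.float),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Stand-in video tensor: T x C x H x W (30 frames of 640x480 RGB).
video = torch.randint(0, 256, (30, 3, 480, 640), dtype=torch.uint8)
with torch.no_grad():
    feats = backbone(preprocess(video))   # (30, 512): one vector per frame
print(feats.shape)
```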
-
We report on the high success rates of our new, scalable, computational approach for sign recognition from monocular video, exploiting linguistically annotated ASL datasets with multiple signers. We recognize signs using a hybrid framework combining state-of-the-art learning methods with features based on what is known about the linguistic composition of lexical signs. We model and recognize the sub-components of sign production, with attention to hand shape, orientation, location, motion trajectories, plus non-manual features, and we combine these within a CRF framework. The effect is to make the sign recognition problem robust, scalable, and feasible with relatively smaller datasets than are …
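The abstract combines the sub-components of sign production within a CRF framework. As a toy illustration of linear-chain CRF decoding (not the authors' model), the sketch below runs Viterbi over per-frame scores for a handful of hypothetical states; in practice the states would correspond to sub-component labels such as hand shape or location, and the scores would come from the learned feature functions.

```python
# Toy linear-chain Viterbi decoding over per-frame scores for
# hypothetical sub-component states (e.g. hand shape/location labels).
import numpy as np

def viterbi(emissions, transitions):
    """emissions: (T, S) per-frame scores; transitions: (S, S) pairwise scores.
    Returns the highest-scoring state sequence."""
    T, S = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + transitions   # (prev state, next state)
        back[t] = cand.argmax(axis=0)         # best predecessor per state
        score = cand.max(axis=0) + emissions[t]
    path = [int(score.argmax())]              # best final state
    for t in range(T - 1, 0, -1):             # follow backpointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

rng = np.random.default_rng(0)
em = rng.normal(size=(10, 4))   # 10 frames, 4 hypothetical states
tr = rng.normal(size=(4, 4))
print(viterbi(em, tr))
```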