Title: Exploring Strategies for Modeling Sign Language Phonology
Like speech, signs are composed of discrete, recombinable features called phonemes. Prior work shows that models which can recognize phonemes are better at sign recognition, motivating deeper exploration into strategies for modeling sign language phonemes. In this work, we train graph convolution networks to recognize the sixteen phoneme "types" found in ASL-LEX 2.0. Specifically, we explore how learning strategies like multi-task and curriculum learning can leverage mutually useful information between phoneme types to facilitate the modeling of sign language phonemes. Results on the Sem-Lex Benchmark show that curriculum learning yields an average accuracy of 87% across all phoneme types, outperforming fine-tuning and multi-task strategies for most phoneme types.
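A minimal sketch of what a multi-task phoneme-type classifier over skeleton keypoints could look like, assuming a PyTorch implementation. The abstract does not give architectural details, so the simple adjacency-averaging graph convolution, the layer sizes, the joint/feature dimensions, and the per-type class counts below are illustrative assumptions rather than the authors' model.

# Minimal sketch of a multi-task phoneme-type classifier over pose keypoints.
# Assumptions (not from the paper): the class counts per phoneme type, the
# adjacency-averaging graph convolution, and all layer sizes.
import torch
import torch.nn as nn

PHONEME_TYPES = {            # hypothetical class counts per phoneme type
    "handshape": 49, "major_location": 5, "minor_location": 37,
    "movement": 12, "selected_fingers": 8, "flexion": 8,
    # ... the remaining ASL-LEX 2.0 types would be listed here
}

class SimpleGraphConv(nn.Module):
    """One graph convolution: average over skeleton neighbours, then a linear map."""
    def __init__(self, in_dim, out_dim, adjacency):
        super().__init__()
        # row-normalized adjacency (assumed to include self-loops)
        self.register_buffer("adj", adjacency / adjacency.sum(-1, keepdim=True))
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x):            # x: (batch, time, joints, features)
        return torch.relu(self.linear(torch.einsum("ij,btjf->btif", self.adj, x)))

class MultiTaskPhonemeGCN(nn.Module):
    def __init__(self, adjacency, feat_dim=3, hidden=64):
        super().__init__()
        self.gcn1 = SimpleGraphConv(feat_dim, hidden, adjacency)
        self.gcn2 = SimpleGraphConv(hidden, hidden, adjacency)
        # one classification head per phoneme type (multi-task learning)
        self.heads = nn.ModuleDict({
            name: nn.Linear(hidden, n_classes) for name, n_classes in PHONEME_TYPES.items()
        })

    def forward(self, x):
        h = self.gcn2(self.gcn1(x))          # (batch, time, joints, hidden)
        pooled = h.mean(dim=(1, 2))          # global average over time and joints
        return {name: head(pooled) for name, head in self.heads.items()}

def multi_task_loss(logits, labels):
    """Sum of per-type cross-entropy losses; equal weighting is an assumption."""
    ce = nn.CrossEntropyLoss()
    return sum(ce(logits[name], labels[name]) for name in logits)

Curriculum learning, which the abstract reports works best, would replace the uniform summation in multi_task_loss with an ordering or weighting of the phoneme-type tasks during training.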
Award ID(s):
1918556
NSF-PAR ID:
10481700
Author(s) / Creator(s):
Publisher / Repository:
Ciaco - i6doc.com
Date Published:
Journal Name:
ESANN 2023 proceedings, European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning
ISBN:
978-2-87587-088-9
Page Range / eLocation ID:
577 to 582
Subject(s) / Keyword(s):
sign language, phonology, isolated sign recognition, American Sign Language, ASL
Format(s):
Medium: X
Location:
Bruges (Belgium) and online
Sponsoring Org:
National Science Foundation
More Like this
  1. Muresan, Smaranda; Nakov, Preslav; Villavicencio, Aline (Eds.)
    Phonemes are defined by their relationship to words: changing a phoneme changes the word. Learning a phoneme inventory with little supervision has been a longstanding challenge with important applications to under-resourced speech technology. In this paper, we bridge the gap between the linguistic and statistical definition of phonemes and propose a novel neural discrete representation learning model for self-supervised learning of phoneme inventory with raw speech and word labels. Under mild assumptions, we prove that the phoneme inventory learned by our approach converges to the true one with an exponentially low error rate. Moreover, in experiments on TIMIT and Mboshi benchmarks, our approach consistently learns a better phoneme-level representation and achieves a lower error rate in a zero-resource phoneme recognition task than previous state-of-the-art self-supervised representation learning algorithms. 
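    The item above describes its neural discrete representation learning model only at a high level. The following is a generic discrete-bottleneck illustration, not that paper's method; the Gumbel-softmax quantizer, the inventory size, and the 39-dimensional acoustic features are assumptions.

    # Generic illustration of a discrete representation bottleneck for phoneme
    # inventory learning. This is NOT the cited paper's model; the Gumbel-softmax
    # quantizer, inventory size, and feature dimension are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DiscretePhonemeBottleneck(nn.Module):
        def __init__(self, feat_dim=39, inventory_size=50, temperature=1.0):
            super().__init__()
            self.logits = nn.Linear(feat_dim, inventory_size)    # score each candidate phoneme
            self.codebook = nn.Embedding(inventory_size, feat_dim)
            self.temperature = temperature

        def forward(self, frames):               # frames: (batch, time, feat_dim)
            logits = self.logits(frames)
            # differentiable, approximately one-hot assignment of each frame to a phoneme
            assignment = F.gumbel_softmax(logits, tau=self.temperature, hard=True)
            quantized = assignment @ self.codebook.weight        # (batch, time, feat_dim)
            return quantized, assignment.argmax(-1)              # features + phoneme ids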
  2.
    Applied linguistic work claims that multilinguals' non-native languages interfere with one another based on similarities in cognitive factors like proficiency or age of acquisition. Two experiments explored how trilinguals regulate control of native- and non-native-language words. Experiment 1 tested 46 Dutch–English–French trilinguals in a monitoring task: participants decided whether phonemes were present in the target-language name of a picture, and phonemes of non-target-language translations resulted in longer response times and more false alarms than phonemes not present in any translation (Colomé, 2001). The second language (English) interfered more than the first (Dutch) when trilinguals monitored in their third language (French). In Experiment 2, 95 bilinguals learned an artificial language to explore the possibility that the language from which a bilingual learns a third language provides practice managing known-language interference. Language of instruction modulated results, suggesting that learning conditions may reduce interference effects previously attributed to cognitive factors.
  3. We present a new approach for isolated sign recognition, which combines a spatial-temporal Graph Convolution Network (GCN) architecture for modeling human skeleton keypoints with late fusion of both the forward and backward video streams, and we explore the use of curriculum learning. We employ a type of curriculum learning that dynamically estimates, during training, the order of difficulty of each input video for sign recognition; this involves learning a new family of data parameters that are dynamically updated during training. The research makes use of a large combined video dataset for American Sign Language (ASL), including data from both the American Sign Language Lexicon Video Dataset (ASLLVD) and the Word-Level American Sign Language (WLASL) dataset, with modified gloss labeling of the latter—to ensure 1-1 correspondence between gloss labels and distinct sign productions, as well as consistency in gloss labeling across the two datasets. This is the first time that these two datasets have been used in combination for isolated sign recognition research. We also compare the sign recognition performance on several different subsets of the combined dataset, varying in, e.g., the minimum number of samples per sign (and therefore also in the total number of sign classes and video examples). 
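    The curriculum described in the item above learns per-sample "data parameters" that are updated during training. One published formulation of this idea (Saxena et al., 2019) scales each sample's logits by a learnable per-sample temperature before the loss; the sketch below follows that formulation as an assumption, since the item does not spell out the exact update rule.

    # Hedged sketch of a data-parameter curriculum: each training video gets a
    # learnable "difficulty" temperature that scales its logits before the
    # cross-entropy loss (cf. Saxena et al., 2019). Treat this as illustrative,
    # not the exact method used in the work above.
    import torch
    import torch.nn as nn

    class DataParameterCurriculum(nn.Module):
        def __init__(self, num_samples, init_log_temperature=0.0):
            super().__init__()
            # one learnable log-temperature per training example
            self.log_temps = nn.Parameter(torch.full((num_samples,), init_log_temperature))

        def forward(self, logits, labels, sample_ids):
            # Easy samples drift toward a low temperature (sharp logits); hard samples
            # toward a high temperature, down-weighting their gradients early in training.
            temps = self.log_temps[sample_ids].exp().unsqueeze(1)   # (batch, 1)
            return nn.functional.cross_entropy(logits / temps, labels)

    # Usage note: the data parameters are optimized jointly with the recognition
    # model, typically with a separate optimizer or learning rate for log_temps.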
  4. A true interpreting agent not only understands sign language and translates it to text, but also understands text and translates it to signs. Much of the AI work in sign language translation to date has focused mainly on translating from signs to text. Toward the latter goal, we propose a text-to-sign translation model, SignNet, which exploits the notion of similarity (and dissimilarity) of visual signs in translating. The module presented here is only one part of a dual-learning, two-task process involving text-to-sign (T2S) as well as sign-to-text (S2T). We currently implement SignNet as a single-channel architecture so that the output of the T2S task can be fed into S2T in a continuous dual-learning framework. By single channel, we refer to a single modality, the body pose joints. In this work, we present SignNet, a T2S task using a novel metric embedding learning process that preserves the distances between sign embeddings relative to their dissimilarity. We also describe how to choose positive and negative examples of signs for similarity testing. From our analysis, we observe that metric embedding learning-based models perform significantly better than models with traditional losses when evaluated using BLEU scores. In the gloss-to-pose task, SignNet performed as well as its state-of-the-art (SoTA) counterparts, and it outperformed them in the text-to-pose task, showing noteworthy improvements in BLEU-1 through BLEU-4 scores (BLEU-1: 31 → 39, ≈26% improvement; BLEU-4: 10.43 → 11.84, ≈14% improvement) when tested on the popular RWTH PHOENIX-Weather 2014T benchmark dataset.
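    The SignNet description above mentions a metric embedding learning process with positive and negative sign examples but does not give the loss. A generic triplet-margin formulation is sketched below as an assumption about what such a metric embedding loss could look like; the embedding dimension and margin are arbitrary.

    # Generic metric-embedding sketch in the spirit of the SignNet description above:
    # pull embeddings of similar signs together, push dissimilar signs apart.
    # The triplet-margin formulation and margin value are assumptions; the
    # abstract does not specify SignNet's exact loss.
    import torch
    import torch.nn as nn

    triplet_loss = nn.TripletMarginLoss(margin=0.5)

    def metric_embedding_loss(anchor, positive, negative):
        """anchor/positive/negative: (batch, embed_dim) pose-sequence embeddings,
        where 'positive' is a visually similar sign and 'negative' a dissimilar one."""
        return triplet_loss(anchor, positive, negative)

    # Toy usage with random embeddings standing in for encoded pose sequences.
    a, p, n = (torch.randn(8, 128) for _ in range(3))
    loss = metric_embedding_loss(a, p, n)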