Title: Universal Sign Language Recognition System Using Gesture Description Generation and Large Language Model
Award ID(s):
2245608
NSF-PAR ID:
10526014
Publisher / Repository:
Proc. of the 18th International Conference on Wireless Artificial Intelligent Computing Systems and Applications (WASA 2024)
Location:
Qingdao, China
Sponsoring Org:
National Science Foundation
More Like this
  1. Applied linguistic work claims that multilinguals' non-native languages interfere with one another based on similarities in cognitive factors like proficiency or age of acquisition. Two experiments explored how trilinguals regulate control of native- and non-native-language words. Experiment 1 tested 46 Dutch–English–French trilinguals in a phoneme-monitoring task: participants decided whether phonemes were present in the target-language name of a picture; phonemes of non-target-language translations resulted in longer response times and more false alarms than phonemes not present in any translation (Colomé, 2001). The second language (English) interfered more than the first (Dutch) when trilinguals monitored in their third language (French). In Experiment 2, 95 bilinguals learned an artificial language to explore the possibility that the language from which a bilingual learns a third language provides practice managing known-language interference. Language of instruction modulated results, suggesting that learning conditions may reduce interference effects previously attributed to cognitive factors.
  2. Pretrained language models often do not perform tasks in ways that are in line with our preferences, e.g., generating offensive text or factually incorrect summaries. Recent work approaches the above issue by learning from a simple form of human evaluation: comparisons between pairs of model-generated task outputs. Comparison feedback conveys limited information about human preferences per human evaluation. Here, we propose to learn from natural language feedback, which conveys more information per human evaluation. We learn from language feedback on model outputs using a three-step learning algorithm. First, we condition the language model on the initial output and feedback to generate many refinements. Second, we choose the refinement with the highest similarity to the feedback. Third, we finetune a language model to maximize the likelihood of the chosen refinement given the input. In synthetic experiments, we first evaluate whether language models accurately incorporate feedback to produce refinements, finding that only large language models (175B parameters) do so. Using only 100 samples of human-written feedback, our learning algorithm finetunes a GPT-3 model to roughly human-level summarization ability. 
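    A minimal sketch of the three-step loop described in this abstract, under stated assumptions: generate_refinements is a hypothetical callable standing in for the language model conditioned on the input, initial output, and feedback, and difflib's string similarity is a crude stand-in for whatever scorer the paper uses to compare refinements against the feedback. This illustrates the refinement-selection steps, not the authors' implementation.

        from difflib import SequenceMatcher
        from typing import Callable, List, Tuple

        def similarity(a: str, b: str) -> float:
            # Stand-in text similarity; the paper's actual scorer may differ.
            return SequenceMatcher(None, a, b).ratio()

        def build_finetuning_pairs(
            generate_refinements: Callable[[str, str, str, int], List[str]],
            examples: List[Tuple[str, str, str]],  # (input, initial_output, feedback)
            n_samples: int = 8,
        ) -> List[Tuple[str, str]]:
            # Step 1: sample many refinements conditioned on the initial output + feedback.
            # Step 2: keep the refinement most similar to the feedback.
            # Step 3 (done elsewhere): finetune the LM to maximize the likelihood
            # of each returned refinement given its input.
            pairs = []
            for inp, initial_output, feedback in examples:
                candidates = generate_refinements(inp, initial_output, feedback, n_samples)
                best = max(candidates, key=lambda c: similarity(c, feedback))
                pairs.append((inp, best))
            return pairs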
  3. Learning the meaning of grounded language (language that references a robot's physical environment and perceptual data) is an important and increasingly widely studied problem in robotics and human-robot interaction. However, with a few exceptions, research in robotics has focused on learning groundings for a single natural language pertaining to rich perceptual data. We present experiments on taking an existing natural language grounding system designed for English and applying it to a novel multilingual corpus of descriptions of objects paired with RGB-D perceptual data. We demonstrate that this specific approach transfers well to different languages, but also present possible design constraints to consider for grounded language learning systems intended for robots that will function in a variety of linguistic settings.