Abstract Over the past decade, there have been great advancements in radio frequency (RF) sensor technology for human–computer interaction applications, such as gesture recognition and, more broadly, human activity recognition. While there is a significant body of work on these topics, in most cases experimental data are acquired in controlled settings by directing participants in which motions to articulate. However, especially for communicative motions such as sign language, such directed data sets do not accurately capture natural, in situ articulations. This results in a difference between the distributions of directed and natural American Sign Language (ASL), which severely degrades natural sign language recognition in real-world scenarios. To overcome these challenges and acquire more representative training data for deep models, the authors develop an interactive gaming environment, ChessSIGN, which records video and radar data of participants as they play the game without any external direction. The authors investigate various ways of generating synthetic samples from directed ASL data, but show that ultimately such data offer little improvement over simply initialising with ImageNet weights. In contrast, the authors propose an interactive learning paradigm in which model training improves as more natural ASL samples are acquired and augmented with synthetic samples generated by a physics-aware generative adversarial network (GAN). The proposed approach enables the recognition of natural ASL in a real-world setting, achieving an accuracy of 69% for 29 ASL signs, a 60% improvement over conventional training with directed ASL data.
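As a rough illustration of the interactive learning paradigm described above, the sketch below fine-tunes an ImageNet-initialised classifier incrementally as natural ASL samples arrive, optionally mixing in synthetic samples from a GAN. The backbone choice, the 29-class head, and the `synth_gan.sample` interface are assumptions for illustration, not the paper's actual implementation.

```python
# Minimal sketch, assuming a PyTorch/torchvision setup; the paper's actual
# architecture and physics-aware GAN are not reproduced here.
import torch
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes=29):
    # Start from ImageNet-pretrained weights (the baseline initialisation
    # reported in the abstract) and replace the head for 29 ASL signs.
    net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

def interactive_update(model, optimizer, natural_batch, synth_gan=None):
    """One incremental step: train on newly acquired natural ASL samples,
    optionally augmented with synthetic samples from a hypothetical
    GAN wrapper `synth_gan` (assumed interface)."""
    x, y = natural_batch
    if synth_gan is not None:
        x_s, y_s = synth_gan.sample(len(y))  # hypothetical augmentation call
        x, y = torch.cat([x, x_s]), torch.cat([y, y_s])
    loss = nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

-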
Abstract Sign languages are human communication systems that are equivalent to spoken language in their capacity for information transfer, but which use a dynamic visual signal for communication. Thus, linguistic metrics of complexity, which are typically developed for linear, symbolic linguistic representation (such as written forms of spoken languages), do not translate easily into sign language analysis. A comparison of physical signal metrics, on the other hand, is complicated by the higher dimensionality (spatial and temporal) of the sign language signal as compared to a speech signal (solely temporal). Here, we review a variety of approaches to operationalizing sign language complexity based on linguistic and physical data, and identify the approaches that allow for high-fidelity modeling of the data in the visual domain, while capturing linguistically relevant features of the sign language signal.
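To make the contrast between a solely temporal speech signal and a spatio-temporal sign signal concrete, one generic way to operationalize physical-signal complexity is per-channel spectral entropy. The sketch below is illustrative only and is not a metric endorsed by the review; the channel counts are placeholders.

```python
# Illustrative sketch: Shannon entropy of the normalised power spectrum,
# computed per channel and averaged. A generic complexity proxy, not the
# review's recommended metric.
import numpy as np

def spectral_entropy(x, axis=-1):
    """Entropy (bits) of the normalised power spectrum along `axis`."""
    psd = np.abs(np.fft.rfft(x, axis=axis)) ** 2
    psd = psd / psd.sum(axis=axis, keepdims=True)
    return -(psd * np.log2(psd + 1e-12)).sum(axis=axis)

# Speech-like signal: 1 channel x T samples (solely temporal).
# Sign-like signal: K spatial channels x T samples, e.g. 21 hand
# joints x (x, y) coordinates (spatial and temporal).
speech = np.random.randn(1, 1000)
sign = np.random.randn(42, 1000)
print(spectral_entropy(speech).mean(), spectral_entropy(sign).mean())
```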
-
Abstract RF sensing-based human activity and hand gesture recognition (HGR) methods have gained enormous popularity with the development of small-package, high-frequency radar systems and powerful machine learning tools. However, most HGR experiments in the literature have been conducted on individual gestures in isolation from preceding and subsequent motions. This paper considers the problem of American Sign Language (ASL) recognition in the context of daily living, which involves sequential classification of a continuous stream of signing mixed with daily activities. In particular, this paper investigates the efficacy of different RF input representations and fusion techniques for ASL and trigger-gesture recognition tasks in a daily living scenario, which can potentially be used for sign-language-sensitive human-computer interfaces (HCI). The proposed approach first detects and segments periods of motion, followed by feature-level fusion of the range-Doppler map, micro-Doppler spectrogram, and envelope for classification with a bidirectional long short-term memory (BiLSTM) recurrent neural network. Results show 93.3% accuracy in identifying 6 activities and 4 ASL signs, as well as a trigger-sign detection rate of 0.93.
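A minimal sketch of the fusion-plus-BiLSTM classifier described above follows. The per-frame encoders and all dimensions are placeholders, not the paper's actual feature extractors; only the overall structure (feature-level fusion of three RF representations, then a bidirectional LSTM) reflects the abstract.

```python
# Hedged sketch of feature-level fusion + BiLSTM classification; encoder
# architectures and sizes are assumed, not taken from the paper.
import torch
import torch.nn as nn

class FusionBiLSTM(nn.Module):
    def __init__(self, feat_dims=(64, 64, 16), hidden=128, num_classes=10):
        super().__init__()
        self.rd_enc = nn.Linear(256, feat_dims[0])   # range-Doppler map frames
        self.md_enc = nn.Linear(128, feat_dims[1])   # micro-Doppler spectrogram slices
        self.env_enc = nn.Linear(8, feat_dims[2])    # envelope features
        self.lstm = nn.LSTM(sum(feat_dims), hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)  # e.g. 6 activities + 4 signs

    def forward(self, rd, md, env):
        # Feature-level fusion: concatenate the three per-frame embeddings,
        # then classify the motion segment with a bidirectional LSTM.
        z = torch.cat([self.rd_enc(rd), self.md_enc(md), self.env_enc(env)], dim=-1)
        out, _ = self.lstm(z)          # (batch, time, 2 * hidden)
        return self.head(out[:, -1])   # logits from the last time step
```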
-
Abstract Over the years, there has been much research in both wearable and video-based American Sign Language (ASL) recognition systems. However, the restrictive and invasive nature of these sensing modalities remains a significant disadvantage in the context of Deaf-centric smart environments or devices that are responsive to ASL. This paper investigates the efficacy of RF sensors for word-level ASL recognition in support of human-computer interfaces designed for deaf or hard-of-hearing individuals. A principal challenge is the training of deep neural networks given the difficulty of acquiring native ASL signing data. In this paper, adversarial domain adaptation is exploited to bridge the physical/kinematic differences between the copysigning of hearing individuals (repetition of sign motion after viewing a video) and the native signing of Deaf individuals who are fluent in sign language. Domain adaptation results are compared with those attained by directly synthesizing ASL signs using generative adversarial networks (GANs). Kinematic improvements to the GAN architecture, such as the insertion of micro-Doppler signature envelopes in a secondary branch of the GAN, are utilized to boost performance. Word-level classification accuracy of 91.3% is achieved for 20 ASL words.
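The sketch below shows one common instantiation of adversarial domain adaptation, a DANN-style gradient-reversal network that treats copysigning as the source domain and native signing as the target; the paper's specific architecture may differ, and the feature dimensions and layer choices here are assumptions.

```python
# Sketch of DANN-style adversarial domain adaptation (gradient reversal);
# illustrative names and sizes, not the paper's exact network.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        # Reverse gradients so the feature extractor is pushed toward
        # domain-invariant representations.
        return -ctx.lam * grad, None

class DANN(nn.Module):
    def __init__(self, in_dim=256, feat_dim=128, num_classes=20):
        super().__init__()
        self.features = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, num_classes)  # 20 ASL words
        self.domain_head = nn.Linear(feat_dim, 2)           # copysign vs. native

    def forward(self, x, lam=1.0):
        f = self.features(x)
        # Word logits from ordinary features; domain logits from reversed
        # gradients, trained jointly with the classification loss.
        return self.classifier(f), self.domain_head(GradReverse.apply(f, lam))
```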