

Search for: All records

Award ID contains: 2238653


  1. Abstract Over the past decade, radio frequency sensor technology has advanced greatly for human–computer interaction applications, such as gesture recognition, and for human activity recognition more broadly. While there is a significant body of research on these topics, in most cases experimental data are acquired in controlled settings by directing participants on which motions to articulate. However, especially for communicative motions such as sign language, directed data sets do not accurately capture natural, in situ articulations. The resulting distribution shift between directed American Sign Language (ASL) and natural ASL severely degrades natural sign language recognition in real-world scenarios. To overcome these challenges and acquire more representative data for training deep models, the authors develop an interactive gaming environment, ChessSIGN, which records video and radar data of participants as they play the game without any external direction. The authors investigate various ways of generating synthetic samples from directed ASL data, but show that ultimately such data offers little improvement over simply initialising with imagery from ImageNet. In contrast, the authors propose an interactive learning paradigm in which model training improves as more and more natural ASL samples are acquired and augmented with synthetic samples generated by a physics-aware generative adversarial network. The authors show that the proposed approach enables recognition of natural ASL in a real-world setting, achieving an accuracy of 69% for 29 ASL signs, a 60% improvement over conventional training with directed ASL data.
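The interactive-learning loop this abstract describes can be summarized in a short sketch. The following is a minimal PyTorch illustration, not the authors' implementation: random tensors stand in for real micro-Doppler spectrograms, the paper's physics-aware GAN is reduced to a plain class-conditional generator, and all layer sizes and hyperparameters are assumptions. Only the 29-sign class count is taken from the abstract.

```python
# Minimal sketch of interactive learning with GAN-based augmentation.
# Illustrative only: stand-in data, assumed shapes and hyperparameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_SIGNS = 29            # ASL signs recognized in the study
SPEC_SHAPE = (1, 64, 64)  # assumed (channels, Doppler bins, time bins)

class Classifier(nn.Module):
    """Small CNN standing in for the radar sign classifier."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Linear(16 * 16 * 16, NUM_SIGNS)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class Generator(nn.Module):
    """Class-conditional generator; the physics-aware constraints of the
    actual paper are not modeled here."""
    def __init__(self, z_dim=100):
        super().__init__()
        self.z_dim = z_dim
        self.embed = nn.Embedding(NUM_SIGNS, 16)
        self.net = nn.Sequential(
            nn.Linear(z_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, 64 * 64), nn.Tanh(),
        )

    def forward(self, z, labels):
        x = torch.cat([z, self.embed(labels)], dim=1)
        return self.net(x).view(-1, *SPEC_SHAPE)

clf, gen = Classifier(), Generator()
opt = torch.optim.Adam(clf.parameters(), lr=1e-4)

for game_round in range(5):
    # Stand-in for natural signs captured during a round of ChessSIGN play.
    real_x = torch.randn(8, *SPEC_SHAPE)
    real_y = torch.randint(0, NUM_SIGNS, (8,))
    # Augment the newly acquired samples with synthetic ones of the same classes.
    with torch.no_grad():
        fake_x = gen(torch.randn(8, gen.z_dim), real_y)
    x, y = torch.cat([real_x, fake_x]), torch.cat([real_y, real_y])
    loss = F.cross_entropy(clf(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In the paper's setting, each round would add genuinely new natural-ASL recordings rather than random tensors, and the abstract reports that recognition improves as this pool of natural samples and their synthetic augmentations accumulates.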
  2. Free, publicly-accessible full text available May 3, 2026
  3. Hedden, Abigail S; Mazzaro, Gregory J (Ed.)
    Human activity recognition (HAR) with radar-based technologies has become a popular research area in the past decade. However, the objective of these studies is often to classify human activity for anyone; thus, models are trained on data spanning as broad a swath of people and mobility profiles as possible. In contrast, applications of HAR and gait analysis to remote health monitoring require characterization of the person-specific qualities of a person's activities and gait, which depend greatly on age, health, and agility. In fact, the speed or agility with which a person moves can be an important health indicator. In this study, we propose a multi-input multi-task deep learning framework to simultaneously learn a person's activity and agility. In this initial study, we consider three agility states: slow, nominal, and fast. It is shown that joint learning of agility and activity improves classification accuracy for both tasks. To the best of our knowledge, this is the first study to consider both agility characterization and personalized activity recognition using RF sensing.
  4. Radar-based recognition of human activities of daily living has been a focus of research for over a decade. Current techniques target generalized motion recognition for any person and rely on massive amounts of data to characterize generic human activity. However, human gait is actually a person-specific biometric, correlated with health and agility, which depends on a person's mobility ethogram. This paper proposes a multi-input multi-task deep learning framework for jointly learning a person's agility and activity. As a proof of concept, we consider three categories of agility, represented by slow, fast, and nominal motion articulations, and show that joint consideration of agility and activity can improve both activity classification accuracy and agility estimation. To the best of our knowledge, this is the first work to consider personalized motion recognition and agility characterization using radar.
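The two agility abstracts above share one architectural idea: a single network with a shared representation and separate heads for activity and agility, trained jointly. Below is a minimal PyTorch sketch of that idea; the choice of two radar inputs (micro-Doppler and range maps), the six-activity count, the layer sizes, and the equal loss weighting are illustrative assumptions, with only the three agility classes (slow, nominal, fast) taken from the abstracts.

```python
# Minimal multi-input multi-task sketch: shared trunk, two classification heads.
# Illustrative only: input choices, sizes, and loss weights are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_ACTIVITIES = 6   # assumed number of daily-living activities
NUM_AGILITY = 3      # slow, nominal, fast (from the abstracts)

class Branch(nn.Module):
    """Small CNN encoder for one radar input representation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),  # -> 16 * 4 * 4 = 256 features per input
        )

    def forward(self, x):
        return self.net(x)

class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.md_branch = Branch()   # micro-Doppler spectrogram branch
        self.rng_branch = Branch()  # range-map branch (assumed second input)
        self.trunk = nn.Sequential(nn.Linear(512, 128), nn.ReLU())
        self.activity_head = nn.Linear(128, NUM_ACTIVITIES)
        self.agility_head = nn.Linear(128, NUM_AGILITY)

    def forward(self, md, rng):
        feats = torch.cat([self.md_branch(md), self.rng_branch(rng)], dim=1)
        z = self.trunk(feats)
        return self.activity_head(z), self.agility_head(z)

model = MultiTaskNet()
md = torch.randn(4, 1, 64, 64)    # stand-in micro-Doppler batch
rng = torch.randn(4, 1, 64, 64)   # stand-in range-map batch
act_y = torch.randint(0, NUM_ACTIVITIES, (4,))
agi_y = torch.randint(0, NUM_AGILITY, (4,))

act_logits, agi_logits = model(md, rng)
# Joint loss with equal weighting (an assumption; the papers only state that
# joint training improves both tasks).
loss = F.cross_entropy(act_logits, act_y) + F.cross_entropy(agi_logits, agi_y)
loss.backward()
```

The shared trunk is what lets joint training help both tasks: gradients from the agility loss shape features that the activity head also uses, which is the effect both abstracts report.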