
Search results — All records, Creators/Authors contains: "Gurbuz, Sevgi Z."

  1. Radar-based recognition of human activities of daily living has been a focus of research for over a decade. Current techniques aim at generalized motion recognition for any person and rely on massive amounts of data to characterize generic human activity. However, human gait is actually a person-specific biometric, correlated with health and agility, that depends on a person's mobility ethogram. This paper proposes a multi-input multi-task deep learning framework for jointly learning a person's agility and activity. As a proof of concept, we consider three categories of agility, represented by slow, nominal, and fast motion articulations, and show that joint consideration of agility and activity can improve both activity classification accuracy and agility estimation. To the best of our knowledge, this is the first work to consider personalized motion recognition and agility characterization using radar. 
    Free, publicly-accessible full text available May 6, 2025
  2. Free, publicly-accessible full text available April 24, 2025
  3. Free, publicly-accessible full text available January 1, 2025
  4. Abstract: Sign languages are human communication systems that are equivalent to spoken language in their capacity for information transfer, but that use a dynamic visual signal for communication. Thus, linguistic metrics of complexity, which are typically developed for linear, symbolic linguistic representations (such as written forms of spoken languages), do not translate easily into sign language analysis. A comparison of physical signal metrics, on the other hand, is complicated by the higher dimensionality (spatial and temporal) of the sign language signal compared to a speech signal (solely temporal). Here, we review a variety of approaches to operationalizing sign language complexity based on linguistic and physical data, and identify the approaches that allow for high-fidelity modeling of the data in the visual domain while capturing linguistically relevant features of the sign language signal. 
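The multi-task framework described in record 1 above — a shared model that jointly predicts activity and agility — can be sketched as a shared feature trunk feeding two task-specific classification heads, trained with a weighted sum of per-task losses. This is a minimal illustrative sketch, not the paper's implementation: the feature dimension, number of activity classes, weights, and the `alpha` loss weighting are all assumptions; only the three agility categories (slow, nominal, fast) come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper): a radar-derived
# feature vector feeds a shared hidden layer, which branches into an
# activity head and a 3-class agility head (slow / nominal / fast).
n_features, n_hidden = 64, 32
n_activities, n_agility = 5, 3

W_shared = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_act = rng.normal(scale=0.1, size=(n_hidden, n_activities))
W_agi = rng.normal(scale=0.1, size=(n_hidden, n_agility))


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


def forward(x):
    """Shared trunk, then two task-specific heads (multi-task output)."""
    h = np.maximum(x @ W_shared, 0.0)  # shared ReLU representation
    return softmax(h @ W_act), softmax(h @ W_agi)


def joint_loss(p_act, p_agi, y_act, y_agi, alpha=0.5):
    """Weighted sum of the two cross-entropy losses (joint training signal)."""
    ce_act = -np.log(p_act[np.arange(len(y_act)), y_act]).mean()
    ce_agi = -np.log(p_agi[np.arange(len(y_agi)), y_agi]).mean()
    return alpha * ce_act + (1 - alpha) * ce_agi


# A toy batch of 4 samples with hypothetical labels.
x = rng.normal(size=(4, n_features))
p_act, p_agi = forward(x)
loss = joint_loss(p_act, p_agi, np.array([0, 1, 2, 3]), np.array([0, 1, 2, 0]))
```

Because both heads share the trunk, gradients from the agility loss also shape the representation used for activity classification, which is the mechanism by which joint learning of the two tasks can improve activity accuracy.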