

Search for: All records

Award ID contains: 1932547

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Abstract

    Current radio frequency (RF) classification techniques assume only one target in the field of view. Multi‐target recognition is challenging because conventional radar signal processing superimposes the targets' micro‐Doppler signatures, making it difficult to recognise multi‐target activity. This study proposes an angular subspace projection technique that generates multiple radar data cubes (RDC) conditioned on angle (RDC‐ω). This approach enables signal separation in the raw RDC, allowing deep neural networks to take the raw RF data, or any other data representation, as input in multi‐target scenarios. When targets are in close proximity and cannot be separated by classical techniques, the proposed approach boosts the relative signal‐to‐noise ratio between targets, resulting in multi‐view spectrograms that boost the classification accuracy when input to the proposed multi‐view DNN. Our results qualitatively and quantitatively characterise the similarity of multi‐view signatures to those acquired in a single‐target configuration. For a nine‐class activity recognition problem, 97.8% accuracy is achieved in a 3‐person scenario while utilising a DNN trained on single‐target data. We also present results for two cases of close proximity (sign language recognition and side‐by‐side activities), where the proposed approach boosts performance.

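The angular subspace projection described in the abstract can be sketched in a few lines: beamform a raw radar data cube across its receive channels toward one hypothesised target angle, producing an angle-conditioned cube (RDC‐ω) per direction. This is a minimal illustration of the general idea only; the array geometry (a half‐wavelength uniform linear array), the cube dimensions, and all names (`steering_vector`, `angular_projection`, `rdc_omega`) are illustrative assumptions, not the paper's actual parameters or implementation.

```python
import numpy as np

# Hypothetical radar data cube: fast-time samples x chirps (slow time) x RX channels.
# The shapes and ULA model below are illustrative assumptions.
rng = np.random.default_rng(0)
n_samples, n_chirps, n_rx = 64, 128, 8
rdc = (rng.standard_normal((n_samples, n_chirps, n_rx))
       + 1j * rng.standard_normal((n_samples, n_chirps, n_rx)))

def steering_vector(theta_deg, n_rx, d=0.5):
    """ULA steering vector for look angle theta, element spacing d (in wavelengths)."""
    k = np.arange(n_rx)
    return np.exp(-2j * np.pi * d * k * np.sin(np.deg2rad(theta_deg)))

def angular_projection(rdc, theta_deg):
    """Project the data cube onto one look angle, yielding an angle-conditioned RDC."""
    a = steering_vector(theta_deg, rdc.shape[2])
    # Weighted sum over the channel axis emphasises returns arriving from theta.
    return rdc @ a.conj() / np.sqrt(rdc.shape[2])

# One angle-conditioned cube per hypothesised target direction;
# each can then feed a spectrogram or raw-data DNN branch independently.
rdc_omega = {theta: angular_projection(rdc, theta) for theta in (-30.0, 0.0, 30.0)}
```

Each entry of `rdc_omega` retains the fast-time/slow-time structure of the original cube, so any downstream representation (range-Doppler maps, micro-Doppler spectrograms) can be computed per angle exactly as in the single-target case.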
  2. Free, publicly-accessible full text available June 1, 2024
  3. Free, publicly-accessible full text available May 1, 2024
  4. Free, publicly-accessible full text available May 1, 2024
  5. Free, publicly-accessible full text available May 1, 2024
  6. Free, publicly-accessible full text available May 1, 2024
  7. Abstract

    Sign languages are human communication systems that are equivalent to spoken language in their capacity for information transfer, but which use a dynamic visual signal for communication. Thus, linguistic metrics of complexity, which are typically developed for linear, symbolic linguistic representation (such as written forms of spoken languages), do not translate easily into sign language analysis. A comparison of physical signal metrics, on the other hand, is complicated by the higher dimensionality (spatial and temporal) of the sign language signal as compared to a speech signal (solely temporal). Here, we review a variety of approaches to operationalizing sign language complexity based on linguistic and physical data, and identify the approaches that allow for high-fidelity modeling of the data in the visual domain while capturing linguistically relevant features of the sign language signal.