

Search for: All records

Creators/Authors contains: "Popa, Dan"

Note: Clicking a Digital Object Identifier (DOI) link takes you to an external site maintained by the publisher. Some full-text articles may not be available free of charge until the publisher's embargo period ends.

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly-accessible full text available August 28, 2025
  2. This work proposes a modular learning framework (MIRA) for rehabilitation robots, based on a new deep recurrent neural network (RNN) that achieves adaptive multi-joint motion imitation. The RNN is fed the fundamental frequencies and ranges of the joint trajectories in order to predict the robot's future joint trajectories. The framework also uses a Segment Online Dynamic Time Warping (SODTW) algorithm to quantify the closeness between robot and patient motion; the SODTW cost determines how much the inputs to the deep RNN are modified, which in turn adapts the robot's movements. By keeping the prediction mechanism (RNN) and the adaptation mechanism (SODTW) separate, the framework achieves modularity, flexibility, and scalability. We evaluated both Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) RNN architectures within the proposed framework. Experiments involved a group of 15 human subjects performing range-of-motion tasks in conjunction with our social robot, Zeno. Comparative analysis of the results demonstrated the superior performance of the LSTM RNN across multiple task variations, highlighting its enhanced capability for adaptive motion imitation. A minimal code sketch of the prediction and adaptation steps is given after this list.
    Free, publicly-accessible full text available August 1, 2025
  3. Free, publicly-accessible full text available August 28, 2025
  4. This paper presents an attention-based deep learning framework that converts robot camera frames with dynamic content into static frames, making it easier to apply simultaneous localization and mapping (SLAM) algorithms. The vast majority of SLAM methods struggle when dynamic objects appear in the environment and occlude the area being captured by the camera. Despite past attempts to deal with dynamic objects, reconstructing large occluded areas with complex backgrounds remains a challenge. Our proposed Dynamic-GAN framework employs a generative adversarial network to remove dynamic objects from a scene and inpaint a static image free of dynamic objects, using spatial-temporal transformers and a novel spatial-temporal loss function. We evaluated Dynamic-GAN both quantitatively and qualitatively on benchmark datasets and on a mobile robot navigating indoor environments. When people appeared in close proximity to the robot, results showed that large, feature-rich occluded areas can be accurately reconstructed with our attention-based deep learning framework for dynamic object removal. Through experiments we demonstrate that our proposed algorithm performs up to 25% better on average than standard benchmark algorithms. A minimal code sketch of the masked-inpainting idea is given after this list.
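The sketch below illustrates the two components described in record 2: an LSTM that maps per-joint fundamental frequencies and motion ranges to a future joint trajectory, and a plain dynamic time warping cost standing in for SODTW. This is not the authors' code; the tensor shapes, layer sizes, and the input-blending adaptation rule are assumptions, and the segmentation step of SODTW is omitted.

```python
# Minimal sketch (assumed shapes and sizes) of RNN trajectory prediction plus a
# DTW-driven adaptation of the RNN inputs, in the spirit of the MIRA abstract.
import torch
import torch.nn as nn


class TrajectoryRNN(nn.Module):
    """Predict future joint angles from per-joint (frequency, range) descriptors."""

    def __init__(self, n_joints=4, hidden=64):
        super().__init__()
        # Input per time step: one (frequency, range) pair per joint.
        self.lstm = nn.LSTM(input_size=2 * n_joints, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_joints)

    def forward(self, descriptors):
        # descriptors: (batch, horizon, 2 * n_joints), repeated over the horizon.
        out, _ = self.lstm(descriptors)
        return self.head(out)              # (batch, horizon, n_joints)


def dtw_cost(a, b):
    """Classic O(n*m) dynamic time warping between two 1-D trajectories.
    The paper's SODTW segments the signal online first; that is omitted here."""
    n, m = len(a), len(b)
    D = torch.full((n + 1, m + 1), float("inf"))
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            step = abs(a[i - 1] - b[j - 1])
            D[i, j] = step + torch.min(torch.stack(
                [D[i - 1, j], D[i, j - 1], D[i - 1, j - 1]]))
    return D[n, m]


def adapt_inputs(robot_desc, patient_desc, robot_traj, patient_traj, gain=0.1):
    """Hypothetical adaptation rule: blend the RNN input descriptors toward the
    patient-derived descriptors, weighted by the warping cost between motions."""
    cost = dtw_cost(robot_traj, patient_traj)
    alpha = torch.clamp(gain * cost, max=1.0)
    return (1 - alpha) * robot_desc + alpha * patient_desc
```

Keeping the predictor (TrajectoryRNN) and the adaptation step (adapt_inputs) as separate functions mirrors the modularity the abstract emphasizes: either piece can be swapped (e.g., GRU instead of LSTM) without touching the other.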
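The next sketch illustrates the core idea of record 4: mask out pixels belonging to dynamic objects and train a generator to inpaint a static frame, judged by a discriminator. This is not the released Dynamic-GAN code; the network sizes, the mask source, and the loss weights are assumptions, and the paper's spatial-temporal transformers and spatial-temporal loss are not reproduced here.

```python
# Minimal sketch (assumed architecture) of mask-conditioned GAN inpainting for
# removing dynamic objects from camera frames before running SLAM.
import torch
import torch.nn as nn


class InpaintGenerator(nn.Module):
    """Encoder-decoder that fills masked (dynamic-object) regions of a frame."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 4, stride=2, padding=1), nn.ReLU(),   # RGB + mask
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, frame, mask):
        # Zero out dynamic pixels and concatenate the mask as a fourth channel.
        x = torch.cat([frame * (1 - mask), mask], dim=1)
        return self.net(x)


class Discriminator(nn.Module):
    """Patch-style critic distinguishing real static frames from inpainted ones."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )

    def forward(self, frame):
        return self.net(frame)


def generator_loss(disc_out, fake, target, mask, adv_weight=0.01):
    """Adversarial term plus L1 reconstruction inside the masked region;
    the weighting is an assumed placeholder for the paper's loss."""
    adv = nn.functional.binary_cross_entropy_with_logits(
        disc_out, torch.ones_like(disc_out))
    recon = nn.functional.l1_loss(fake * mask, target * mask)
    return adv_weight * adv + recon
```

The inpainted output can then be handed to an unmodified SLAM front end, which is the point of converting dynamic frames to static ones rather than modifying the SLAM algorithm itself.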