
Creators/Authors contains: "Wen, Yue"


  1. Abstract

    Objective. High-density electromyography (HD-EMG) decomposition algorithms are used to identify individual motor unit (MU) spike trains, which collectively constitute the neural code of movements, to predict motor intent. This approach has advanced from offline to online decomposition and from isometric to dynamic contractions, leading to a wide range of neural-machine interface applications. However, current online methods need offline retraining when applied to the same muscle on a different day or to a different person, which limits their use in a real-time neural-machine interface. We proposed a deep convolutional neural network (CNN) framework for neural drive estimation, which takes frames of HD-EMG signals as input, extracts general spatiotemporal properties of MU action potentials, and outputs the number of spikes in each frame. The deep CNN can generalize without retraining to HD-EMG data recorded in separate sessions, muscles, or participants. Approach. We recorded HD-EMG signals from the vastus medialis and vastus lateralis muscles of five participants while they performed isometric contractions during two sessions separated by ∼20 months. We identified MU spike trains from the HD-EMG signals using a convolutive blind source separation (BSS) method, and then used the cumulative spike train (CST) of these MUs and the HD-EMG signals to train and validate the deep CNN. Main results. On average, the correlation coefficients between the CST from the BSS and that from the deep CNN were 0.983 ± 0.006 for leave-one-out across-sessions-and-muscles validation and 0.989 ± 0.002 for leave-one-out across-participants validation. When trained with more than four datasets, the performance of the deep CNN saturated at 0.984 ± 0.001 for cross-validations across muscles, sessions, and participants. Significance. The deep CNN is generalizable across these conditions without retraining and could provide a robust estimate of neural drive to muscles for neural-machine interfaces.
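    The evaluation described above can be sketched in a few lines: sum the binary MU spike trains from the BSS into a cumulative spike train, bin it into per-frame spike counts (the CNN's regression target), and score an estimate with the Pearson correlation coefficient. This is a minimal illustration, not the paper's implementation; the sizes, frame length, and the noisy stand-in for the network's output are all assumed for demonstration.

    ```python
    import numpy as np

    # Illustrative sizes (not from the paper): 10 motor units, 2000 samples.
    rng = np.random.default_rng(0)
    mu_spike_trains = (rng.random((10, 2000)) < 0.01).astype(int)  # binary spikes from BSS

    # Cumulative spike train (CST): total spikes across motor units per sample.
    cst = mu_spike_trains.sum(axis=0)

    # Bin the CST into frames and count spikes per frame, mirroring the CNN's
    # per-frame spike-count output (frame length is an assumed parameter).
    frame_len = 40
    n_frames = cst.size // frame_len
    target_counts = cst[: n_frames * frame_len].reshape(n_frames, frame_len).sum(axis=1)

    # Stand-in for a network estimate (true counts plus small noise), used only
    # to demonstrate the correlation-coefficient metric reported in the paper.
    estimate = target_counts + rng.normal(0.0, 0.1, size=n_frames)

    r = np.corrcoef(target_counts, estimate)[0, 1]
    print(round(r, 3))
    ```

    The reported 0.983–0.989 correlations compare per-frame spike counts in exactly this way, with the deep CNN's output taking the place of the noisy `estimate` above.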

     
  2. Abstract Background

    Human-human (HH) interaction mediated by machines (e.g., robots or passive sensorized devices), which we call human-machine-human (HMH) interaction, has been studied with increasing interest in the last decade. The use of machines allows the implementation of different forms of audiovisual and/or physical interaction in dyadic tasks. HMH interaction between two partners can improve the dyad’s ability to accomplish a joint motor task (task performance) beyond either partner’s ability to perform the task solo. It can also be used to more efficiently train an individual to improve their solo task performance (individual motor learning). We review recent research on the impact of HMH interaction on task performance and individual motor learning in the context of motor control and rehabilitation, and we propose future research directions in this area.

    Methods

    A systematic search was performed on the Scopus, IEEE Xplore, and PubMed databases. The search query was designed to find studies that involve HMH interaction in motor control and rehabilitation settings. Studies that do not investigate the effect of changing the interaction conditions were filtered out. Thirty-one studies met our inclusion criteria and were used in the qualitative synthesis.

    Results

    Studies are analyzed based on their results related to the effects of interaction type (e.g., audiovisual communication and/or physical interaction), interaction mode (collaborative, cooperative, co-active, and competitive), and partner characteristics. Visuo-physical interaction generally results in better dyadic task performance than visual interaction alone. In cases where the physical interaction between humans is described by a spring, there are conflicting results as to the effect of the stiffness of the spring. In terms of partner characteristics, having a more skilled partner improves dyadic task performance more than having a less skilled partner. However, conflicting results were observed in terms of individual motor learning.

    Conclusions

    Although it is difficult to draw clear conclusions as to which interaction type, mode, or partner characteristic may lead to optimal task performance or individual motor learning, these results show the possibility for improved outcomes through HMH interaction. Future work that focuses on selecting the optimal personalized interaction conditions and exploring their impact on rehabilitation settings may facilitate the transition of HMH training protocols to clinical implementations.

     
  3. Abstract
    Over the past few decades, there have been many studies of human-human physical interaction to better understand why humans physically interact so effectively and how dyads outperform individuals in certain motor tasks. Because of the different methodologies and experimental setups in these studies, however, it is difficult to draw general conclusions as to the reasons for this improved performance. In this study, we propose an open-source experimental framework for the systematic study of the effect of human-human interaction, as mediated by robots, at the ankle joint. We also propose a new framework to study various interactive behaviors (i.e., collaborative, cooperative, and competitive tasks) that can be emulated using a virtual spring connecting human pairs. To validate the proposed experimental framework, we perform a transparency analysis, which is closely related to haptic rendering performance. We compare muscle EMG and ankle motion data while subjects are barefoot, attached to the unpowered robot, and attached to the powered robot implementing transparency control. We also validate the performance in rendering a virtual spring covering a range of stiffness values (5-50 Nm/rad) while the subjects track several desired trajectories (sine waves at frequencies between 0.1 and 1.1 Hz). Finally, we study the performance of the system in human-human interaction under nine different interactive conditions, demonstrating the feasibility of the system for studying human-human interaction under different interactive behaviors.
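    The virtual spring coupling described above reduces to rendering, on each partner's robot, a torque proportional to the difference between the two ankle angles. The sketch below illustrates that control law under assumptions: the function name, sign convention, and example angles are hypothetical, with only the stiffness range (5-50 Nm/rad) taken from the abstract.

    ```python
    def spring_torque(k, theta_self, theta_partner):
        """Torque (Nm) rendered on one partner by a virtual spring of stiffness
        k (Nm/rad) connecting the two ankle angles (rad). Sign convention is
        assumed: positive torque pulls the wearer toward the partner's angle."""
        return k * (theta_partner - theta_self)

    # Example with a stiffness in the paper's reported 5-50 Nm/rad range;
    # the angles below are illustrative, not from the study.
    k = 25.0             # Nm/rad
    theta_self = 0.10    # rad
    theta_partner = 0.18 # rad
    tau = spring_torque(k, theta_self, theta_partner)
    print(round(tau, 3))  # 25 * 0.08 = 2.0 Nm
    ```

    Varying `k` across the 5-50 Nm/rad range changes how tightly the partners are coupled, which is how the collaborative, cooperative, and competitive conditions can be tuned in such a setup.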