


Title: Development of a Wearable Ultrasound Transducer for Sensing Muscle Activities in Assistive Robotics Applications
Robotic prostheses and powered exoskeletons are novel assistive robotic devices for modern medicine. Muscle activity sensing plays an important role in controlling assistive robotic devices. Most devices measure the surface electromyography (sEMG) signal for myoelectric control. However, sEMG is an integrated signal from muscle activities, making it difficult to sense muscle movements in specific small regions, particularly at different depths. Alternatively, traditional ultrasound imaging has recently been proposed to monitor muscle activity due to its ability to directly visualize superficial and at-depth muscles. Despite these advantages, traditional ultrasound probes lack wearability. In this paper, a wearable ultrasound (US) transducer, based on lead zirconate titanate (PZT) and a polyimide substrate, was developed for a muscle activity sensing demonstration. The fabricated PZT-5A elements were arranged into a 4 × 4 array and then packaged in polydimethylsiloxane (PDMS). In vitro porcine tissue experiments were carried out by generating muscle activities artificially, and the muscle movements were detected by the proposed wearable US transducer via muscle movement imaging. Experimental results showed that all 16 elements had very similar acoustic behaviors: the averaged central frequency, −6 dB bandwidth, and electrical impedance in water were 10.59 MHz, 37.69%, and 78.41 Ω, respectively. The in vitro study successfully demonstrated the capability of monitoring local muscle activity using the prototyped wearable transducer. The findings indicate that ultrasonic sensing may be an alternative to standard myoelectric control for assistive robotics applications.
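As a hypothetical illustration (not the authors' code), the per-element metrics reported above, the central frequency and the −6 dB fractional bandwidth, can be estimated from a pulse-echo magnitude spectrum roughly as follows. The Gaussian-shaped test spectrum and the threshold-crossing interpolation are assumptions for the sketch:

```python
# Hypothetical sketch: center frequency and -6 dB fractional bandwidth
# of one transducer element from its pulse-echo magnitude spectrum.

def minus6db_metrics(freqs_mhz, mag_db):
    """Return (center_frequency_MHz, fractional_bandwidth_percent)."""
    peak_i = max(range(len(mag_db)), key=lambda i: mag_db[i])
    cutoff = mag_db[peak_i] - 6.0  # -6 dB below the spectral peak

    def crossing(indices):
        # Walk away from the peak until the magnitude drops below the
        # cutoff, then linearly interpolate the crossing frequency.
        prev = peak_i
        for i in indices:
            if mag_db[i] < cutoff:
                f0, f1 = freqs_mhz[i], freqs_mhz[prev]
                m0, m1 = mag_db[i], mag_db[prev]
                return f0 + (cutoff - m0) * (f1 - f0) / (m1 - m0)
            prev = i
        return freqs_mhz[indices[-1]]

    f_lo = crossing(range(peak_i - 1, -1, -1))
    f_hi = crossing(range(peak_i + 1, len(mag_db)))
    fc = 0.5 * (f_lo + f_hi)           # center of the -6 dB band
    bw_pct = 100.0 * (f_hi - f_lo) / fc
    return fc, bw_pct

# Synthetic Gaussian-shaped spectrum (in dB) peaking near 10.6 MHz
freqs = [5.0 + 0.01 * i for i in range(1101)]   # 5-16 MHz grid
spec = [-0.5 * (f - 10.6) ** 2 for f in freqs]  # invented test data
fc, bw = minus6db_metrics(freqs, spec)
```

The same routine, applied per element, would yield the averaged 10.59 MHz / 37.69% figures reported for the 16-element array.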
Award ID(s):
2124017
PAR ID:
10427094
Journal Name:
Biosensors
Volume:
13
Issue:
1
ISSN:
2079-6374
Page Range / eLocation ID:
134
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    There have been significant advances in biosignal extraction techniques to drive external biomechatronic devices or to serve as inputs to sophisticated human-machine interfaces. The control signals are typically derived from biological signals, such as myoelectric measurements made either on the surface of the skin or subcutaneously. Other biosignal sensing modalities are emerging as well. With improvements in sensing modalities and control algorithms, it is becoming possible to robustly control the target position of an end-effector, but it remains largely unknown to what extent these improvements can lead to naturalistic, human-like movement. In this paper, we sought to answer this question. We utilized a sensing paradigm called sonomyography, based on continuous ultrasound imaging of forearm muscles. Unlike myoelectric control strategies, which measure electrical activation and use the extracted signals to determine the velocity of an end-effector, sonomyography measures muscle deformation directly with ultrasound and uses the extracted signals to proportionally control the position of an end-effector. Previously, we showed that users were able to accurately and precisely perform a virtual target acquisition task using sonomyography. In this work, we investigate the time course of the control trajectories derived from sonomyography. We show that the time course of the sonomyography-derived trajectories that users take to reach virtual targets reflects the kinematic characteristics typical of biological limbs. Specifically, during a target acquisition task, the velocity profiles followed the minimum-jerk trajectory characteristic of point-to-point arm reaching movements, with similar times to target. In addition, the trajectories based on ultrasound imaging exhibited a systematic delay and scaling of peak movement velocity as the movement distance increased.
We believe this is the first evaluation of the similarities between control policies in coordinated movements of jointed limbs and those based on position control signals extracted at the individual muscle level. These results have strong implications for the future development of control paradigms for assistive technologies.
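The minimum-jerk profile referenced above has a closed form for point-to-point movements. A small sketch (hypothetical, not the study's code) of the position and velocity of a minimum-jerk reach from x0 to xf over duration T:

```python
# Minimum-jerk point-to-point trajectory (Flash & Hogan form):
# x(t) = x0 + D * (10*tau^3 - 15*tau^4 + 6*tau^5), with tau = t/T.
def min_jerk(t, T, x0, xf):
    """Position and velocity at time t for a movement of duration T."""
    tau = t / T
    d = xf - x0
    pos = x0 + d * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)
    vel = (d / T) * (30 * tau**2 - 60 * tau**3 + 30 * tau**4)
    return pos, vel

# The velocity peaks at t = T/2 with magnitude 1.875 * D / T, so peak
# velocity scales with movement distance, consistent with the scaling
# of peak velocity with distance observed in the trajectories.
p_mid, v_peak = min_jerk(0.5, 1.0, 0.0, 1.0)
```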

     
  2. Abstract

    Robotic assistive or rehabilitative devices are promising aids for people with neurological disorders as they help regain normative functions for both upper and lower limbs. However, it remains challenging to accurately estimate human intent or residual efforts non-invasively when using these robotic devices. In this article, we propose a deep learning approach that uses a brightness mode, that is, B-mode, of ultrasound (US) imaging from skeletal muscles to predict the ankle joint net plantarflexion moment while walking. The designed structure of customized deep convolutional neural networks (CNNs) guarantees the convergence and robustness of the deep learning approach. We investigated the influence of the US imaging's region of interest (ROI) on the net plantarflexion moment prediction performance. We also compared the CNN-based moment prediction performance utilizing B-mode US and sEMG spectrum imaging with the same ROI size. Experimental results from eight young participants walking on a treadmill at multiple speeds verified an improved accuracy by using the proposed US imaging + deep learning approach for net joint moment prediction. With the same CNN structure, compared to the prediction performance by using sEMG spectrum imaging, US imaging significantly reduced the normalized prediction root mean square error by 37.55% (p < .001) and increased the prediction coefficient of determination by 20.13% (p < .001). The findings show that the US imaging + deep learning approach personalizes the assessment of human joint voluntary effort, which can be incorporated with assistive or rehabilitative devices to improve clinical performance based on the assist-as-needed control strategy.
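The two comparison metrics reported above, normalized prediction RMSE and the coefficient of determination, can be sketched as follows. This is a hypothetical illustration; normalizing the RMSE by the range of the measured moment is an assumption, since the abstract does not specify the normalization:

```python
# Hypothetical metric definitions (not the study's exact code):
# normalized RMSE (here, RMSE as a percentage of the measured range)
# and the coefficient of determination R^2.
def nrmse_percent(y_true, y_pred):
    mse = sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)
    return 100.0 * mse ** 0.5 / (max(y_true) - min(y_true))

def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((a - b) ** 2 for a, b in zip(y_true, y_pred))
    ss_tot = sum((a - mean) ** 2 for a in y_true)
    return 1.0 - ss_res / ss_tot

# Toy measured vs. predicted net plantarflexion moments (invented values).
y_true = [0.0, 1.0, 2.0, 3.0]
y_pred = [0.5, 1.5, 1.5, 2.5]
```

Comparing these two scores between the US-driven and sEMG-driven predictors, per trial, is what the reported 37.55% and 20.13% differences summarize.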
  3.
    Clinical translation of “intelligent” lower-limb assistive technologies relies on robust control interfaces capable of accurately detecting user intent. To date, mechanical sensors and surface electromyography (EMG) have been the primary sensing modalities used to classify ambulation. Ultrasound (US) imaging can also be used to detect user intent by characterizing structural changes of muscle. Our study evaluates wearable US imaging as a new sensing modality for continuous classification of five discrete ambulation modes (level, incline, decline, stair ascent, and stair descent) and benchmarks its performance relative to EMG sensing. Ten able-bodied subjects were equipped with a wearable US scanner and eight unilateral EMG sensors. Time-intensity features were recorded from US images of three thigh muscles. Features from sliding windows of EMG signals were analyzed in two configurations: one with 5 EMG sensors on muscles around the thigh, and another with 3 additional sensors placed on the shank. Linear discriminant analysis was implemented to continuously classify these phase-dependent features from each sensing modality as one of the five ambulation modes. US-based sensing statistically improved mean classification accuracy to 99.8% (99.5-100% CI) compared to 8 EMG sensors (85.8%; 84.0-87.6% CI) and 5 EMG sensors (75.3%; 74.5-76.1% CI). Further, separability analyses show the importance of superficial and deep US information for stair classification relative to the other modes. These results are the first to demonstrate the ability of US-based sensing to classify discrete ambulation modes, highlighting the potential for improved assistive device control using less widespread, less superficial, and higher-resolution sensing of skeletal muscle.
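As a minimal sketch of the classifier named above (hypothetical code, not the study's pipeline), linear discriminant analysis with a pooled within-class covariance classifies two-dimensional window features into mode labels. The feature values and class names here are invented:

```python
import math

# Hypothetical two-feature LDA: class-conditional Gaussians with a shared
# (pooled) covariance yield linear decision boundaries between modes.
def fit_lda(X, y):
    classes = sorted(set(y))
    n = len(y)
    means, priors = {}, {}
    for c in classes:
        pts = [x for x, lbl in zip(X, y) if lbl == c]
        means[c] = [sum(p[0] for p in pts) / len(pts),
                    sum(p[1] for p in pts) / len(pts)]
        priors[c] = len(pts) / n
    # Pooled within-class covariance (2x2), then its inverse.
    s = [[0.0, 0.0], [0.0, 0.0]]
    for x, lbl in zip(X, y):
        d = [x[0] - means[lbl][0], x[1] - means[lbl][1]]
        for i in range(2):
            for j in range(2):
                s[i][j] += d[i] * d[j]
    for i in range(2):
        for j in range(2):
            s[i][j] /= n - len(classes)
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    return classes, means, priors, inv

def predict_lda(model, x):
    classes, means, priors, inv = model

    def score(c):
        # Linear discriminant: x' S^-1 m - 0.5 m' S^-1 m + log prior
        m = means[c]
        im = [inv[0][0] * m[0] + inv[0][1] * m[1],
              inv[1][0] * m[0] + inv[1][1] * m[1]]
        return (x[0] * im[0] + x[1] * im[1]
                - 0.5 * (m[0] * im[0] + m[1] * im[1])
                + math.log(priors[c]))

    return max(classes, key=score)

# Invented time-intensity window features for two of the five modes.
X = [(0.0, 0.0), (0.5, 0.0), (0.0, 0.5),
     (5.0, 5.0), (5.5, 5.0), (5.0, 5.5)]
y = ["level", "level", "level",
     "stair_ascent", "stair_ascent", "stair_ascent"]
model = fit_lda(X, y)
```

In the study the same idea runs over phase-dependent features and all five mode labels; only the feature dimensionality and class count differ.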
  4. Abstract

    Background

    Improving the prediction ability of a human-machine interface (HMI) is critical to accomplish a bio-inspired or model-based control strategy for rehabilitation interventions, which are of increased interest for assisting limb function after neurological injuries. A fundamental role of the HMI is to accurately predict human intent by mapping signals from a mechanical sensor or surface electromyography (sEMG) sensor. These sensors are limited to measuring the resulting limb force or movement or the neural signal evoking the force. As the intermediate mapping in the HMI also depends on muscle contractility, a motivation exists to include architectural features of the muscle as surrogates of dynamic muscle movement, thus further improving the HMI's prediction accuracy.

    Objective

    The purpose of this study is to investigate a non-invasive sEMG and ultrasound (US) imaging-driven Hill-type neuromuscular model (HNM) for net ankle joint plantarflexion moment prediction. We hypothesize that the fusion of signals from sEMG and US imaging results in a more accurate net plantarflexion moment prediction than sEMG or US imaging alone.

    Methods

    Ten young non-disabled participants walked on a treadmill at speeds of 0.50, 0.75, 1.00, 1.25, and 1.50 m/s. The proposed HNM consists of two muscle-tendon units. The muscle activation for each unit was calculated as a weighted summation of the normalized sEMG signal and normalized muscle thickness signal from US imaging. The HNM calibration was performed under both single-speed mode and inter-speed mode, and then the calibrated HNM was validated across all walking speeds.
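The weighted-summation step described above can be sketched as follows (hypothetical code; the min-max normalization and the fusion weight `w` are assumptions, since the abstract does not give the calibrated values):

```python
def minmax_normalize(signal):
    """Scale a signal to [0, 1] by its own minimum and maximum."""
    lo, hi = min(signal), max(signal)
    return [(v - lo) / (hi - lo) for v in signal]

def fused_activation(semg, thickness, w=0.6):
    """Muscle activation for one muscle-tendon unit as a weighted sum of
    normalized sEMG and normalized US-derived muscle thickness.
    w is a hypothetical fusion weight; in the study it would be
    identified during HNM calibration."""
    e = minmax_normalize(semg)
    u = minmax_normalize(thickness)
    return [w * ei + (1.0 - w) * ui for ei, ui in zip(e, u)]

# Toy sEMG envelope and muscle-thickness traces (invented units).
act = fused_activation([0.0, 2.0, 4.0], [10.0, 12.0, 14.0], w=0.6)
```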

    Results

    On average, the normalized moment prediction root mean square error was reduced by 14.58% (p = 0.012) and 36.79% (p < 0.001) with the proposed HNM when compared to the sEMG-driven and US imaging-driven HNMs, respectively. Also, the models calibrated with data from the inter-speed mode were more robust for moment prediction than those calibrated in single-speed modes.

    Conclusions

    The proposed sEMG-US imaging-driven HNM can significantly improve the net plantarflexion moment prediction accuracy across multiple walking speeds. The findings imply that the proposed HNM can be potentially used in bio-inspired control strategies for rehabilitative devices due to its superior prediction.

     
  5.
    This paper investigates an ultrasound (US) imaging-based methodology to quantitatively assess the contraction levels of the plantar flexors. Echogenicity derived from US imaging at different anatomical depths, covering both the lateral gastrocnemius (LGS) and soleus (SOL) muscles, is used to predict the volitional isometric plantar flexion moment. Synchronous measurements, including the plantar flexion torque signal, a surface electromyography (sEMG) signal, and US imaging of both the LGS and SOL muscles, are collected. Four feature sets, sEMG alone, LGS echogenicity alone, SOL echogenicity alone, and their fusion, are used to train a Gaussian process regression (GPR) model and predict plantar flexion torque. The experimental results on four non-disabled participants show that torque prediction accuracy is improved significantly by using the LGS or SOL echogenicity signal rather than the sEMG signal. However, there is no significant improvement from the fused features compared to LGS or SOL echogenicity alone. The findings imply that US imaging-derived signals improve the accuracy of predicting volitional effort of the human plantar flexors. Potentially, US imaging can be used as a new sensing modality to measure or predict human lower-limb motion intent in clinical rehabilitation devices.
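A minimal sketch of the echogenicity feature named above (hypothetical code; the rectangular ROI and the mean-intensity definition are assumptions): echogenicity in a B-mode frame is commonly summarized as the mean grayscale intensity within a muscle's region of interest, with separate ROIs at the LGS and SOL depths.

```python
def roi_echogenicity(frame, top, bottom, left, right):
    """Mean grayscale intensity inside a rectangular ROI of a B-mode
    frame; `frame` is a 2D list of pixel intensities (rows of columns)."""
    vals = [px for row in frame[top:bottom] for px in row[left:right]]
    return sum(vals) / len(vals)

# Toy 4x4 frame: a brighter (more echogenic) band in the lower two rows,
# standing in for a deeper muscle such as the SOL.
frame = [[10, 10, 10, 10],
         [10, 10, 10, 10],
         [50, 50, 50, 50],
         [50, 50, 50, 50]]
shallow = roi_echogenicity(frame, 0, 2, 0, 4)  # LGS-depth ROI
deep = roi_echogenicity(frame, 2, 4, 0, 4)     # SOL-depth ROI
```

Per-frame values like these, one per depth, are the kind of scalar features that could then be fed to the GPR model alongside or instead of the sEMG feature.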