

Title: Automated Movement Assessment in Stroke Rehabilitation
We are developing a system for long-term Semi-Automated Rehabilitation At the Home (SARAH) that relies on low-cost and unobtrusive video-based sensing. We present a cyber-human methodology used by the SARAH system for automated assessment of upper extremity stroke rehabilitation at the home. We propose a hierarchical model for automatically segmenting stroke survivors' movements and generating training task performance assessment scores during rehabilitation. The hierarchical model fuses expert therapist knowledge-based approaches with data-driven techniques. The expert knowledge is more observable in the higher layers of the hierarchy (task and segment) and is therefore more accessible to algorithms incorporating high-level constraints relating to activity structure (i.e., the type and order of segments per task). We utilize a Hidden Markov Model (HMM) and a Decision Tree model to connect these high-level priors to data-driven analysis. The lower layers (RGB images and raw kinematics) need to be addressed primarily through data-driven techniques. We use a transformer-based architecture operating on low-level action features (tracking of individual body joints and objects) and a Multi-Stage Temporal Convolutional Network (MS-TCN) operating on raw RGB images. We develop a sequence that effectively combines these complementary algorithms, thus encoding the information from different layers of the movement hierarchy. Through this combination, we produce robust segmentation and task assessment results on noisy, variable, and limited data, which is characteristic of low-cost video capture of rehabilitation at the home. Our proposed approach achieves 85% accuracy in per-frame labeling, 99% accuracy in segment classification, and 93% accuracy in task completion assessment. Although the methodology proposed in this paper applies to upper extremity rehabilitation using the SARAH system, it could potentially be used, with minor alterations, to assist automation in many other movement rehabilitation contexts (e.g., lower extremity training after neurological injury).
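To make the fusion idea concrete, below is a minimal sketch of how high-level segment-order priors can constrain noisy per-frame predictions: an HMM transition matrix encoding the expected segment sequence is combined with frame-wise class probabilities (such as those an MS-TCN or transformer head would emit) via Viterbi decoding. All matrices, shapes, and names here are illustrative assumptions, not the SARAH implementation.

```python
# Sketch: constrain noisy per-frame segment probabilities with an HMM
# whose transition matrix encodes the expected order of segments in a
# task. Inputs are illustrative, not from the SARAH system.
import numpy as np

def viterbi_decode(frame_probs, trans, init):
    """frame_probs: (T, S) per-frame segment probabilities.
    trans: (S, S) transition matrix encoding allowed segment order.
    init: (S,) initial segment distribution.
    Returns the most likely segment label per frame."""
    T, S = frame_probs.shape
    log_p = np.log(frame_probs + 1e-12)
    log_t = np.log(trans + 1e-12)
    score = np.log(init + 1e-12) + log_p[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_t          # (S, S): prev -> cur
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_p[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]                          # per-frame segment labels
```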
Award ID(s):
2230762
PAR ID:
10351747
Date Published:
Journal Name:
Frontiers in Neurology
Volume:
12
ISSN:
1664-2295
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Background

    The incidence of stroke and stroke-related hemiparesis has been steadily increasing and is projected to become a serious social, financial, and physical burden on the aging population. Limited access to outpatient rehabilitation further compounds this healthcare issue and isolates stroke survivors in rural areas. However, recent advances in deep learning-based motion detection enable the use of handheld smartphone cameras for body tracking, offering unparalleled accessibility.

    Methods

    In this study, we develop an automated method for evaluating a shortened variant of the Fugl-Meyer assessment, the standard stroke rehabilitation scale describing upper extremity motor function. We pair this technology with a series of machine learning models, including several neural network architectures and an eXtreme Gradient Boosting (XGBoost) model, to score 16 of the 33 (49%) Fugl-Meyer item activities.
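    As a rough illustration of this scoring setup, the sketch below trains a gradient-boosted classifier to predict an ordinal item score from pose-derived trial features; the feature layout and data are hypothetical stand-ins, not the study's actual pipeline.

```python
# Hedged sketch of per-item auto-scoring: pose-derived features feed a
# gradient-boosted classifier predicting an ordinal Fugl-Meyer item
# score (0/1/2). Features and data are hypothetical.
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 24))        # e.g., joint-angle statistics per trial
y = rng.integers(0, 3, size=200)      # FMA item scores: 0, 1, or 2

model = xgb.XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X, y)
print(model.predict(X[:5]))           # predicted item scores
```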

    Results

    In this observational study, 45 acute stroke patients completed at least one recorded Fugl-Meyer assessment for training the auto-scorers, which yielded item-wise average accuracies ranging from 78.1% to 82.7%.

    Conclusion

    In this study, an automated method was developed for evaluating a shortened variant of the Fugl-Meyer assessment, the standard stroke rehabilitation scale describing upper extremity motor function. This novel method demonstrates the potential to conduct telehealth rehabilitation evaluations and assessments with accuracy and broad availability.

     
  2. Chua Chin Heng, Matthew (Ed.)
    Stroke rehabilitation seeks to accelerate motor recovery by training functional activities, but may have minimal impact because of insufficient training doses. In animals, training hundreds of functional motions in the first weeks after stroke can substantially boost upper extremity recovery. The optimal quantity of functional motions to boost recovery in humans is currently unknown, however, because no practical tools exist to measure them during rehabilitation training. Here, we present PrimSeq, a pipeline to classify and count functional motions trained in stroke rehabilitation. Our approach integrates wearable sensors to capture upper-body motion, a deep learning model to predict motion sequences, and an algorithm to tally motions. The trained model accurately decomposes rehabilitation activities into elemental functional motions, outperforming competitive machine learning methods. PrimSeq furthermore quantifies these motions at a fraction of the time and labor costs of human experts. We demonstrate the capabilities of PrimSeq in previously unseen stroke patients with a range of upper extremity motor impairment. We expect that our methodological advances will support the rigorous measurement required for quantitative dosing trials in stroke rehabilitation. 
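    A minimal sketch of the final tallying step described above: collapse per-timestep motion predictions into runs and count each functional motion, filtering out short jitter runs. The label names and threshold are illustrative, not PrimSeq's actual taxonomy.

```python
# Sketch: tally functional motions from a predicted label sequence by
# run-length grouping. Labels and min_run threshold are illustrative.
from itertools import groupby
from collections import Counter

def count_motions(predicted_labels, min_run=5):
    """Count motions, ignoring runs shorter than min_run timesteps."""
    counts = Counter()
    for label, run in groupby(predicted_labels):
        if label != "idle" and sum(1 for _ in run) >= min_run:
            counts[label] += 1
    return counts

seq = ["idle"] * 10 + ["reach"] * 30 + ["transport"] * 25 + ["reach"] * 40
print(count_motions(seq))   # Counter({'reach': 2, 'transport': 1})
```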
  3. The ability to estimate 3D human body pose and movement, also known as human pose estimation (HPE), enables many applications for home-based health monitoring, such as remote rehabilitation training. Several possible solutions have emerged using sensors ranging from RGB cameras and depth sensors to millimeter-wave (mmWave) radars and wearable inertial sensors. Despite previous efforts on datasets and benchmarks for HPE, few datasets exploit multiple modalities or focus on home-based health monitoring. To bridge this gap, we present mRI, a multi-modal 3D human pose estimation dataset with mmWave, RGB-D, and inertial sensors. Our dataset consists of over 160k synchronized frames from 20 subjects performing rehabilitation exercises and supports benchmarks for HPE and action detection. We perform extensive experiments using our dataset and delineate the strengths of each modality. We hope that the release of mRI can catalyze research in pose estimation, multi-modal learning, and action understanding, and, more importantly, facilitate applications of home-based health monitoring.
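    The kind of cross-modal synchronization such a dataset requires can be sketched as nearest-timestamp matching between streams; the sampling rates and layout below are assumptions for illustration, not the actual mRI release format.

```python
# Sketch: align multi-modal streams by nearest timestamp. Rates and
# stream layout are assumed, not taken from the mRI release.
import numpy as np

def nearest_sync(ref_ts, other_ts):
    """For each reference timestamp, index of the closest timestamp
    in the other modality's stream."""
    other_ts = np.asarray(other_ts)
    idx = np.searchsorted(other_ts, ref_ts)
    idx = np.clip(idx, 1, len(other_ts) - 1)
    left, right = other_ts[idx - 1], other_ts[idx]
    return np.where(ref_ts - left < right - ref_ts, idx - 1, idx)

rgb_ts = np.arange(0.0, 5.0, 1 / 30)      # 30 fps camera
mmwave_ts = np.arange(0.0, 5.0, 1 / 20)   # 20 Hz radar
pairs = nearest_sync(rgb_ts, mmwave_ts)   # radar frame per RGB frame
```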
  4. Background

    Upper limb proprioceptive impairments are common after stroke and affect daily function. Recent work has shown that stroke survivors have difficulty using visual information to improve proprioception. It is unclear how the eye movements that guide arm action are impacted after stroke. Here, we aimed to understand how upper limb proprioceptive impairments impact eye movements in individuals with stroke.

    Methods

    Control (N = 20) and stroke participants (N = 20) performed a proprioceptive matching task with upper limb and eye movements. A KINARM exoskeleton with eye tracking was used to assess limb and eye kinematics. The upper limb was passively moved by the robot and participants matched the location with either an arm or eye movement. Accuracy was measured as the difference between passive robot movement location and active limb matching (Hand-End Point Error) or active eye movement matching (Eye-End Point Error).
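    The error metrics described above reduce to per-trial Euclidean distances, which the sketch below computes for hypothetical hand and eye matches, along with the hand-eye error correlation reported in the Results; all data values are invented for illustration.

```python
# Sketch: end-point error as Euclidean distance between the robot-
# imposed target and the participant's hand or gaze match. Values are
# illustrative, not study data.
import numpy as np
from scipy.stats import pearsonr

def end_point_error(target_xy, match_xy):
    """Per-trial Euclidean distance between target and matched location."""
    return np.linalg.norm(np.asarray(target_xy) - np.asarray(match_xy), axis=1)

targets = np.array([[0.10, 0.25], [0.15, 0.30], [0.05, 0.20]])
hand    = np.array([[0.13, 0.29], [0.11, 0.24], [0.09, 0.27]])
eye     = np.array([[0.12, 0.27], [0.13, 0.27], [0.07, 0.24]])

hand_err = end_point_error(targets, hand)   # Hand-End Point Error
eye_err  = end_point_error(targets, eye)    # Eye-End Point Error
r, p = pearsonr(hand_err, eye_err)          # hand-eye error relationship
```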

    Results

    We found that individuals with stroke had significantly larger Hand-End Point (2.1×) and Eye-End Point (1.5×) Errors compared to controls. Further, we found that proprioceptive errors of the hand and eye were highly correlated in stroke participants (r = .67, P = .001), a relationship not observed in controls.

    Conclusions

    Eye movement accuracy declined as a function of proprioceptive impairment of the more-affected limb, which was used as a proprioceptive reference. The inability to use proprioceptive information of the arm to coordinate eye movements suggests that disordered proprioception impacts integration of sensory information across different modalities. These results have important implications for how vision is used to actively guide limb movement during rehabilitation.

     
  5. Semantic segmentation methods are typically designed for RGB color images, which are interpolated from raw Bayer images. While RGB images provide abundant color information and are easily understood by humans, they also add extra storage and computational burden for neural networks. Raw Bayer images, on the other hand, preserve primitive color information in a single channel, potentially increasing segmentation accuracy while significantly decreasing storage and computation time. In this paper, we propose RawSeg-Net to segment single-channel raw Bayer images directly. Unlike RGB images, whose pixels already contain neighboring context information from ISP color interpolation, each pixel in a raw Bayer image contains no context clues. Based on Bayer pattern properties, RawSeg-Net applies dynamic attention to the Bayer image's spectral frequencies and spatial locations to mitigate classification confusion, and employs a re-sampling strategy to capture both global and local contextual information.
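    As a generic illustration of working directly on Bayer input (not the RawSeg-Net architecture itself), one common preprocessing step packs the 2×2 RGGB mosaic into four half-resolution planes before a segmentation backbone:

```python
# Sketch: pack a single-channel RGGB Bayer mosaic into four half-
# resolution planes. A generic preprocessing step, not RawSeg-Net.
import torch

def pack_bayer_rggb(raw):
    """raw: (H, W) tensor with an RGGB mosaic; returns (4, H/2, W/2)."""
    r  = raw[0::2, 0::2]   # red sites
    g1 = raw[0::2, 1::2]   # green sites on red rows
    g2 = raw[1::2, 0::2]   # green sites on blue rows
    b  = raw[1::2, 1::2]   # blue sites
    return torch.stack([r, g1, g2, b], dim=0)

raw = torch.rand(480, 640)          # synthetic Bayer frame
planes = pack_bayer_rggb(raw)       # shape: (4, 240, 320)
```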