

Title: Continuous Classification of Locomotion in Response to Task Complexity and Anticipatory State
Objective: Intent recognition in lower-extremity assistive devices (e.g., prostheses and exoskeletons) is typically limited either to recognition of steady-state locomotion or to changes of terrain (e.g., level ground to stairs) occurring in a straight-line path under anticipated conditions. Stability is highly affected during non-steady changes of direction such as cuts, especially when they are unanticipated, posing a high risk of fall-related injuries. Here, we studied the influence of changes of direction and user anticipation on task recognition and accordingly introduced classification schemes that accommodate such effects. Methods: A linear discriminant analysis (LDA) classifier continuously classified straight-line walking, sidestep/crossover cuts (single transitions), and cuts-to-stair locomotion (mixed transitions) performed under varied task anticipatory conditions. Training paradigms with varying levels of anticipated/unanticipated exposure and analysis windows of 100–600 ms were examined. Results: Anticipated tasks were classified more accurately than unanticipated tasks. Including bouts of the target task in the training data was necessary to improve generalization to unanticipated locomotion. As few as two bouts of the target task were sufficient to reduce errors to <20% in unanticipated mixed transitions, whereas in single transitions and straight walking, substantially more unanticipated information (i.e., five bouts) was necessary to achieve similar outcomes. Window size modifications did not significantly influence classification performance. Conclusion: Adjusting the training paradigm helps achieve classification schemes capable of adapting to changes of direction and task anticipatory state. Significance: The findings could provide insight into developing classification schemes that adapt to changes of direction and user anticipation. They could inform intent recognition strategies for controlling lower-limb assistive devices to robustly handle "unknown" circumstances and thus deliver an increased level of reliability and safety.
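The Methods above describe continuous classification over sliding analysis windows with an LDA classifier. A minimal sketch of that pipeline is shown below, assuming a 300 ms window (the paper examined 100–600 ms), simple per-channel mean/standard-deviation features, and synthetic two-channel sensor data; the sampling rate, feature set, and two-class labeling are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sliding-window LDA classification sketch; window length,
# features, and synthetic data are illustrative assumptions only.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 100                 # assumed sensor sampling rate (Hz)
WIN = int(0.300 * FS)    # 300 ms analysis window

def window_features(signal: np.ndarray) -> np.ndarray:
    """Per-channel mean and standard deviation for one window."""
    return np.concatenate([signal.mean(axis=0), signal.std(axis=0)])

def make_dataset(stream: np.ndarray, labels: np.ndarray):
    """Slide a window over the multichannel stream; label each window
    by the task active at the window's most recent sample."""
    X, y = [], []
    for end in range(WIN, len(stream)):
        X.append(window_features(stream[end - WIN:end]))
        y.append(labels[end])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
# Toy 2-channel stream: two "tasks" with different signal statistics.
stream = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(3, 1, (500, 2))])
labels = np.array([0] * 500 + [1] * 500)

X, y = make_dataset(stream, labels)
clf = LinearDiscriminantAnalysis().fit(X, y)
print(f"training accuracy: {clf.score(X, y):.2f}")
```

Classifying every window as the stream advances, rather than once per stride, is what makes the recognition "continuous": the only windows that are inherently ambiguous are those straddling a task transition.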
Award ID(s):
2054343
NSF-PAR ID:
10310441
Author(s) / Creator(s):
Date Published:
Journal Name:
Frontiers in Bioengineering and Biotechnology
Volume:
9
ISSN:
2296-4185
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Intent recognition in lower-limb assistive devices typically relies on neuromechanical sensing of an affected limb acquired through embedded device sensors. It remains unknown whether signals from more widespread sources, such as the contralateral leg and torso, positively influence intent recognition, and how specific locomotor tasks that place high demands on the neuromuscular system, such as changes of direction, contribute to intent recognition. In this study, we evaluated the performance of signals from varying mechanical modalities (accelerographic, gyroscopic, and joint angles) and locations (the trailing leg, leading leg, and torso) during straight walking, changes of direction (cuts), and cuts to stair ascent with varying task anticipation. Biomechanical information from the torso demonstrated poor performance across all conditions. Unilateral (trailing- or leading-leg) joint angle data provided the highest accuracy. Surprisingly, neither the fusion of unilateral and torso data nor the combination of multiple signal modalities improved recognition. For these fused-modality data, similar trends, but with diminished accuracy rates, were observed under unanticipated conditions. Finally, for datasets that achieved relatively accurate (≥90%) recognition of unanticipated tasks, these levels of recognition were achieved after the mid-swing of the trailing/transitioning leg, prior to the subsequent heel strike. These findings suggest that mechanical sensing of the legs and torso for the recognition of straight-line and transient locomotion can be implemented relatively flexibly (i.e., in signal modality and in the choice of leading or trailing leg) and, importantly, suggest that more widespread sensing is not always optimal.
  2. Fundamental knowledge in activity recognition for individuals with motor disorders such as Parkinson's disease (PD) has been primarily limited to detection of steady-state/static tasks (e.g., sitting, standing, walking). To date, identification of non-steady-state locomotion on uneven terrains (stairs, ramps) has not received much attention. Furthermore, previous research has mainly relied on data from a large number of body locations, which could adversely affect user convenience and system performance. Here, individuals with mild stages of PD and healthy subjects performed non-steady-state circuit trials comprising stairs, a ramp, and changes of direction. An offline analysis using a linear discriminant analysis (LDA) classifier and a Long Short-Term Memory (LSTM) neural network was performed for task recognition. The performance of accelerographic and gyroscopic information from varied lower/upper-body segments was tested across a set of user-independent and user-dependent training paradigms. Comparing the F1 score of a given signal across classifiers showed improved performance using LSTM compared to LDA. Using LSTM, even a subset of information (e.g., feet data) in subject-independent training provided an F1 score > 0.8, whereas comparable performance with LDA required subject-dependent training and/or biomechanical data from multiple body locations. The findings could inform a number of applications in healthcare monitoring and the development of advanced lower-limb assistive devices by providing insight into classification schemes capable of handling non-steady-state and unstructured locomotion in individuals with mild Parkinson's disease.
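The subject-independent versus subject-dependent training paradigms compared above can be sketched with an LDA classifier and the F1 score. The synthetic per-subject data, the subject-specific offset standing in for inter-subject variability, and the feature dimensions below are assumptions for illustration only.

```python
# Hypothetical comparison of subject-independent vs subject-dependent
# training paradigms; all data and dimensions are synthetic assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import f1_score

rng = np.random.default_rng(1)

def subject_data(offset: float):
    """Toy two-class feature windows for one subject; `offset` stands
    in for subject-specific signal bias."""
    X0 = rng.normal(0 + offset, 1, (100, 4))
    X1 = rng.normal(2 + offset, 1, (100, 4))
    return np.vstack([X0, X1]), np.array([0] * 100 + [1] * 100)

subjects = [subject_data(off) for off in (0.0, 0.3, 0.6)]

# Subject-independent: train on subjects 0-1, test on held-out subject 2.
Xtr = np.vstack([subjects[0][0], subjects[1][0]])
ytr = np.concatenate([subjects[0][1], subjects[1][1]])
Xte, yte = subjects[2]
f1_indep = f1_score(yte, LinearDiscriminantAnalysis().fit(Xtr, ytr).predict(Xte))

# Subject-dependent: train and test within subject 2 (random half split).
idx = rng.permutation(len(yte))
tr, te = idx[:len(yte) // 2], idx[len(yte) // 2:]
f1_dep = f1_score(yte[te],
                  LinearDiscriminantAnalysis().fit(Xte[tr], yte[tr]).predict(Xte[te]))

print(f"subject-independent F1: {f1_indep:.2f}, subject-dependent F1: {f1_dep:.2f}")
```

The gap between the two F1 scores grows with the inter-subject offset, which is the trade-off the abstract describes: subject-dependent training absorbs subject-specific bias at the cost of per-user calibration.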
  3. Research on robotic lower-limb assistive devices over the past decade has generated autonomous, multiple degree-of-freedom devices to augment human performance during a variety of scenarios. However, the increase in capabilities of these devices is met with an increase in the complexity of the overall control problem and in the requirement for an accurate and robust sensing modality for intent recognition. Due to its ability to precede changes in motion, surface electromyography (EMG) is widely studied as a peripheral sensing modality for capturing features of muscle activity as an input for control of powered assistive devices. To capture features that contribute to muscle contraction and joint motion beyond the activity of superficial muscles, researchers have introduced sonomyography, or real-time dynamic ultrasound imaging of skeletal muscle. However, the ability of sonomyography features to continuously predict multiple lower-limb joint kinematics during widely varying ambulation tasks, and their potential as an input for powered multiple degree-of-freedom lower-limb assistive devices, is unknown. The objective of this research is to evaluate surface EMG and sonomyography, as well as the fusion of features from both sensing modalities, as inputs to Gaussian process regression models for the continuous estimation of hip, knee, and ankle angles and velocities during level walking, stair ascent/descent, and ramp ascent/descent. Gaussian process regression is a Bayesian nonlinear regression model that has been introduced as an alternative to musculoskeletal model-based techniques. In this study, time-intensity features of sonomyography on both the anterior and posterior thigh, along with time-domain features of surface EMG from eight muscles on the lower limb, were used to train and test subject-dependent and task-invariant Gaussian process regression models for the continuous estimation of hip, knee, and ankle motion.
Overall, anterior sonomyography sensor fusion with surface EMG significantly improved estimation of hip, knee, and ankle motion for all ambulation tasks (level ground, stair, and ramp ambulation) in comparison to surface EMG alone. Additionally, anterior sonomyography alone significantly improved errors at the hip and knee for most tasks compared to surface EMG. These findings help inform the implementation and integration of volitional control strategies for robotic assistive technologies.

     
  4. Abstract

    Human ambulation is typically characterized during steady-state isolated tasks (e.g., walking, running, stair ambulation). However, general human locomotion comprises continuous adaptation to the varied terrains encountered during activities of daily life. To fill an important gap in knowledge that may lead to improved therapeutic and device interventions for mobility-impaired individuals, it is vital to identify how the mechanics of individuals change as they transition between different ambulatory tasks, and as they encounter terrains of differing severity. In this work, we study lower-limb joint kinematics during the transitions between level walking and stair ascent and descent over a range of stair inclination angles. Using statistical parametric mapping, we identify where and when the kinematics of transitions are unique from the adjacent steady-state tasks. Results show unique transition kinematics primarily in the swing phase, which are sensitive to stair inclination. We also train Gaussian process regression models for each joint to predict joint angles given the gait phase, stair inclination, and ambulation context (transition type, ascent/descent), demonstrating a mathematical modeling approach that successfully incorporates terrain transitions and severity. The results of this work further our understanding of transitory human biomechanics and motivate the incorporation of transition-specific control models into mobility-assistive technology.

     
  5.
    An important problem in designing human-robot systems is the integration of human intent and performance into the robotic control loop, especially in complex tasks. Bimanual coordination is a complex human behavior that is critical in many fine motor tasks, including robot-assisted surgery. To fully leverage the capabilities of the robot as an intelligent and assistive agent, online recognition of bimanual coordination could be important. Robotic assistance for a suturing task, for example, will be fundamentally different during phases when the suture is wrapped around the instrument (i.e., making a C-loop) than when the ends of the suture are pulled apart. In this study, we develop an online recognition method for bimanual coordination modes (i.e., the directions and symmetries of right- and left-hand movements) using geometric descriptors of hand motion. We (1) develop this framework based on ideal trajectories obtained during virtual 2D bimanual path-following tasks performed by human subjects operating Geomagic Touch haptic devices, (2) test the offline recognition accuracy of bimanual direction and symmetry from human subject movement trials, and (3) evaluate how the framework can be used to characterize 3D trajectories of the da Vinci Surgical System's surgeon-side manipulators during bimanual surgical training tasks. In the human subject trials, our geometric bimanual movement classification accuracy was 92.3% for movement direction (i.e., hands moving together, parallel, or away) and 86.0% for symmetry (e.g., mirror or point symmetry). We also show that this approach can be used for online classification of different bimanual coordination modes during needle transfer, C-loop making, and suture pulling gestures on the da Vinci system, with results matching the expected modes.
    Finally, we discuss how these online estimates are sensitive to task environment factors and surgeon expertise, thus inspiring future work that could leverage adaptive control strategies to enhance user skill during robot-assisted surgery.
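One way to classify bimanual movement direction from geometric descriptors is to compare the hands' mean velocity directions and the change in the gap between them. The specific descriptors and thresholds below are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical geometric classification of bimanual movement direction
# (together / parallel / away); thresholds are illustrative assumptions.
import numpy as np

def direction_mode(left: np.ndarray, right: np.ndarray) -> str:
    """left/right: (T, 2) hand position trajectories."""
    vl = np.diff(left, axis=0).mean(axis=0)    # mean left-hand velocity
    vr = np.diff(right, axis=0).mean(axis=0)   # mean right-hand velocity
    # Cosine of the angle between the mean velocity vectors.
    cos = vl @ vr / (np.linalg.norm(vl) * np.linalg.norm(vr))
    if cos < -0.5:
        # Hands move in opposing directions: closing gap means "together",
        # opening gap means "away".
        gap_start = np.linalg.norm(right[0] - left[0])
        gap_end = np.linalg.norm(right[-1] - left[-1])
        return "together" if gap_end < gap_start else "away"
    return "parallel"

t = np.linspace(0, 1, 50).reshape(-1, 1)
tog = direction_mode(np.hstack([t - 1, 0 * t]), np.hstack([1 - t, 0 * t]))
apart = direction_mode(np.hstack([-t, 0 * t]), np.hstack([t, 0 * t]))
par = direction_mode(np.hstack([t, 0 * t]), np.hstack([t, 0 * t]))
print(tog, apart, par)
```

Because the descriptors are computed from short trajectory segments, the same function can run online over a sliding buffer of recent samples, which matches the abstract's online-recognition setting.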