Sensory feedback is critical in fine motor control, learning, and adaptation. However, robotic prosthetic limbs currently lack the feedback segment of the communication loop between user and device. Sensory substitution feedback can close this gap, but this improvement often persists only when users cannot see their prosthesis, suggesting the provided feedback is redundant with vision. Thus, given the choice, users rely on vision over artificial feedback. To effectively augment vision, sensory feedback must provide information that vision cannot provide or provides poorly. Although vision is known to be less precise at estimating speed than position, no work has compared the precision of visual speed estimates across different representations of biomimetic arm movement. In this study, we investigated the uncertainty of visual speed estimates for different virtual arm movements. We found that uncertainty was greatest for visual estimates of joint speeds, compared to absolute rotational or linear endpoint speeds. Furthermore, this uncertainty increased when speed in the joint reference frame varied over time, potentially because joint speed was overestimated. Finally, we demonstrate a joint-based sensory substitution feedback paradigm capable of significantly reducing joint speed uncertainty when paired with vision. Ultimately, this work may lead to improved prosthesis control and capacity for motor learning.
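The distinction between the movement representations compared above can be made concrete with a small kinematics sketch. This is illustrative only; the planar two-link geometry, segment lengths, and speeds below are assumptions, not parameters from the study.

```python
import numpy as np

# Minimal sketch (not taken from the study): the same planar two-link arm
# movement expressed as joint speeds versus endpoint speeds. Link lengths,
# angles, and velocities are illustrative assumptions.
L_UPPER, L_FORE = 0.30, 0.25   # assumed segment lengths (m)

def endpoint_speeds(q, dq):
    """Map joint angles q (rad) and joint speeds dq (rad/s) of a planar
    shoulder-elbow arm to endpoint linear and rotational speeds."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    # Jacobian of endpoint position with respect to the joint angles
    J = np.array([[-L_UPPER * s1 - L_FORE * s12, -L_FORE * s12],
                  [ L_UPPER * c1 + L_FORE * c12,  L_FORE * c12]])
    linear_speed = np.linalg.norm(J @ dq)    # endpoint linear speed (m/s)
    rotational_speed = abs(dq[0] + dq[1])    # forearm orientation rate (rad/s)
    return linear_speed, rotational_speed

q = np.array([0.6, 0.9])     # shoulder and elbow angles (rad)
dq = np.array([1.0, -0.5])   # joint speeds (rad/s)
print(endpoint_speeds(q, dq))
```

The same pair of joint speeds thus corresponds to a single endpoint linear speed and a single endpoint rotational speed, which is why uncertainty can differ across the three representations even though the underlying movement is identical.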
Myoelectric prostheses are a popular choice for restoring motor capability following the loss of a limb, but they do not provide direct feedback to the user about the movements of the device—in other words, kinesthesia. The outcomes of studies providing artificial sensory feedback are often influenced by the availability of incidental feedback. When subjects are blindfolded and disconnected from the prosthesis, artificial sensory feedback consistently improves control; however, when subjects wear a prosthesis and can see the task, benefits often deteriorate or become inconsistent. We theorize that providing artificial sensory feedback about prosthesis speed, which cannot be precisely estimated via vision, will improve the learning and control of a myoelectric prosthesis.
In this study, we test a joint-speed feedback system with six transradial amputee subjects to evaluate how it affects myoelectric control and adaptation behavior during a virtual reaching task.
Our results showed that joint-speed feedback lowered reaching errors and compensatory movements during steady-state reaches. However, the same feedback provided no improvement when control was perturbed.
These outcomes suggest that the benefit of joint-speed feedback may depend on the complexity of the myoelectric control and the context of the task.
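The abstract does not describe how the joint-speed feedback was encoded. As a hedged illustration only, the sketch below shows one common style of sensory substitution mapping, in which a joint's angular speed is mapped linearly onto a stimulus frequency; the speed ceiling and frequency range are assumptions, not the parameters used in the study.

```python
# Hypothetical joint-speed-to-stimulus mapping (illustration only; not the
# encoding used in the study above). Joint speed is normalized and mapped
# linearly onto a feedback stimulus frequency.
def speed_to_frequency(joint_speed, max_speed=3.0, f_min=50.0, f_max=300.0):
    """Map |joint speed| in rad/s to a stimulus frequency in Hz."""
    s = min(abs(joint_speed), max_speed) / max_speed   # normalize to [0, 1]
    return f_min + s * (f_max - f_min)

for w in (0.0, 0.5, 1.5, 3.0):
    print(f"{w:.1f} rad/s -> {speed_to_frequency(w):.0f} Hz")
```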
- PAR ID: 10391966
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Journal of NeuroEngineering and Rehabilitation
- Volume: 20
- Issue: 1
- ISSN: 1743-0003
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- Clinical myoelectric prostheses lack the sensory feedback and sufficient dexterity required to complete activities of daily living efficiently and accurately. Providing haptic feedback of relevant environmental cues to the user or imbuing the prosthesis with autonomous control authority have been separately shown to improve prosthesis utility. Few studies, however, have investigated the effect of combining these two approaches in a shared control paradigm, and none have evaluated such an approach from the perspective of neural efficiency (the relationship between task performance and mental effort measured directly from the brain). In this work, we analyzed the neural efficiency of 30 non-amputee participants in a grasp-and-lift task of a brittle object. Here, a myoelectric prosthesis featuring vibrotactile feedback of grip force and autonomous control of grasping was compared with a standard myoelectric prosthesis with and without vibrotactile feedback. As a measure of mental effort, we captured the prefrontal cortex activity changes using functional near infrared spectroscopy during the experiment. It was expected that the prosthesis with haptic shared control would improve both task performance and mental effort compared to the standard prosthesis. Results showed that only the haptic shared control system enabled users to achieve high neural efficiency, and that vibrotactile feedback was important for grasping with the appropriate grip force. These results indicate that the haptic shared control system synergistically combines the benefits of haptic feedback and autonomous controllers, and is well-poised to inform such hybrid advancements in myoelectric prosthesis technology. (An illustrative efficiency-score sketch appears after this list.)
- Existing models of human walking use low-level reflexes or neural oscillators to generate movement. While appropriate to generate the stable, rhythmic movement patterns of steady-state walking, these models lack the ability to change their movement patterns or spontaneously generate new movements in the specific, goal-directed way characteristic of voluntary movements. Here we present a neuromuscular model of human locomotion that bridges this gap and combines the ability to execute goal-directed movements with the generation of stable, rhythmic movement patterns that are required for robust locomotion. The model represents goals for voluntary movements of the swing leg on the task level of swing leg joint kinematics. Smooth movement plans towards the goal configuration are generated on the task level and transformed into descending motor commands that execute the planned movements, using internal models. The movement goals and plans are updated in real time based on sensory feedback and task constraints. On the spinal level, the descending commands during the swing phase are integrated with a generic stretch reflex for each muscle. Stance leg control solely relies on dedicated spinal reflex pathways. Spinal reflexes stimulate Hill-type muscles that actuate a biomechanical model with eight internal joints and six free-body degrees of freedom. The model is able to generate voluntary, goal-directed reaching movements with the swing leg and combine multiple movements in a rhythmic sequence. During walking, the swing leg is moved in a goal-directed manner to a target that is updated in real time based on sensory feedback to maintain upright balance, while the stance leg is stabilized by low-level reflexes and a behavioral organization switching between swing and stance control for each leg. With this combination of reflex-based stance leg control and voluntary, goal-directed control of the swing leg, the model controller generates rhythmic, stable walking patterns in which the swing leg movement can be flexibly updated in real time to step over or around obstacles. (A stretch-reflex sketch appears after this list.)
- It is thought that the brain does not simply react to sensory feedback, but rather uses an internal model of the body to predict the consequences of motor commands before sensory feedback arrives. Time-delayed sensory feedback can then be used to correct for the unexpected: perturbations, motor noise, or a moving target. The cerebellum has been implicated in this predictive control process. Here, we show that the feedback gain in patients with cerebellar ataxia matches that of healthy subjects, but that patients exhibit substantially more phase lag. This difference is captured by a computational model incorporating a Smith predictor in healthy subjects that is missing in patients, supporting the predictive role of the cerebellum in feedback control. Lastly, we improve cerebellar patients’ movement control by altering (phase advancing) the visual feedback they receive from their own self movement in a simplified virtual reality setup. (A Smith-predictor sketch appears after this list.)
- Humans are adept at a wide variety of motor skills, including the handling of complex objects and using tools. Advances to understand the control of voluntary goal-directed movements have focused on simple behaviors such as reaching, uncoupled to any additional object dynamics. Under these simplified conditions, basic elements of motor control, such as the roles of body mechanics, objective functions, and sensory feedback, have been characterized. However, these elements have mostly been examined in isolation, and the interactions between these elements have received less attention. This study examined a task with internal dynamics, inspired by the daily skill of transporting a cup of coffee, with additional expected or unexpected perturbations to probe the structure of the controller. Using optimal feedback control (OFC) as the basis, it proved necessary to endow the model of the body with mechanical impedance to generate the kinematic features observed in the human experimental data. The addition of mechanical impedance revealed that simulated movements were no longer sensitively dependent on the objective function, a highly debated cornerstone of optimal control. Further, feedforward replay of the control inputs was similarly successful in coping with perturbations as when feedback, or sensory information, was included. These findings suggest that when the control model incorporates a representation of the mechanical properties of the limb, that is, embodies its dynamics, the specific objective function and sensory feedback become less critical, and complex interactions with dynamic objects can be successfully managed. (A point-mass impedance sketch appears after this list.)
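For the haptic shared control entry above, "neural efficiency" relates task performance to mental effort. One commonly used efficiency score standardizes both quantities and takes their difference; whether that study used this exact formulation is not stated in its abstract, and the numbers below are fabricated for illustration.

```python
import numpy as np

# Illustrative efficiency score: z-scored performance minus z-scored mental
# effort, scaled by 1/sqrt(2). Data are made up for illustration only.
def efficiency(performance, effort):
    z_perf = (performance - performance.mean()) / performance.std(ddof=1)
    z_eff = (effort - effort.mean()) / effort.std(ddof=1)
    return (z_perf - z_eff) / np.sqrt(2)

performance = np.array([0.92, 0.75, 0.88, 0.60])  # e.g., task success rate
effort = np.array([0.40, 0.65, 0.35, 0.80])       # e.g., normalized PFC activation
print(np.round(efficiency(performance, effort), 2))
```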
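For the neuromuscular walking model above, the "generic stretch reflex" added to the descending command during swing can be sketched as delayed, thresholded muscle-length feedback. The gain, length offset, and delay below are assumptions for illustration, not the model's published parameters.

```python
# Illustrative stretch reflex for one Hill-type muscle: the descending command
# is summed with delayed, thresholded length feedback. All constants are
# assumptions, not the walking model's published parameters.
def muscle_stimulation(descending_cmd, lce_history, t, dt,
                       gain=2.0, l_offset=1.0, delay=0.02, stim_min=0.01):
    """Return muscle stimulation in [stim_min, 1] at time t (seconds).
    lce_history holds normalized contractile-element lengths sampled every dt."""
    i = max(0, int(round((t - delay) / dt)))          # index of the delayed sample
    lce = lce_history[min(i, len(lce_history) - 1)]   # delayed muscle length
    reflex = gain * max(0.0, lce - l_offset)          # fires only when stretched
    return min(1.0, max(stim_min, descending_cmd + reflex))

# Example: a muscle recently stretched to 1.1 of its offset length
history = [1.0] * 80 + [1.1] * 20                     # sampled at dt = 1 ms
print(muscle_stimulation(0.05, history, t=0.119, dt=0.001))
```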
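For the cerebellar entry above, the Smith-predictor idea is that the controller acts on an undelayed internal prediction of the output, corrected by the difference between the delayed measurement and an equally delayed copy of that prediction. The discrete-time sketch below is a generic illustration with an assumed plant, gain, and delay, not the paper's fitted model.

```python
# Generic discrete-time Smith predictor (illustration only; the plant, gain,
# and delay are assumptions, not the study's fitted model).
a, b = 0.95, 0.05      # internal model and plant: y[k+1] = a*y[k] + b*u[k]
delay = 10             # feedback delay in samples
kp = 20.0              # proportional controller gain
target = 1.0

y_plant = y_model = 0.0
meas_line = [0.0] * delay    # delay line for plant measurements
model_line = [0.0] * delay   # matching delay line for model predictions

for k in range(300):
    # Undelayed prediction, corrected by the delayed prediction error
    y_hat = y_model + (meas_line[0] - model_line[0])
    u = kp * (target - y_hat)        # controller acts on the prediction
    y_plant = a * y_plant + b * u    # true plant (its output reaches us late)
    y_model = a * y_model + b * u    # internal forward model (no delay)
    meas_line = meas_line[1:] + [y_plant]
    model_line = model_line[1:] + [y_model]

print(round(y_plant, 3))   # approaches the target despite the feedback delay
```

If the internal model is removed (so the controller acts only on the delayed measurement), the same gain produces the large phase lag characteristic of delayed feedback, which is the qualitative pattern the study attributes to cerebellar patients.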
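For the last entry, the role of mechanical impedance can be illustrated with a point mass that replays a pre-planned feedforward force. With limb-like stiffness and damping acting about the planned trajectory, an unexpected push is absorbed even without sensing the perturbation; without impedance, the error persists. The parameters and the minimum-jerk plan are assumptions, not the paper's model.

```python
import numpy as np

# Minimal illustration (not the paper's model): a point mass executes a
# pre-planned minimum-jerk reach using only feedforward force replay.
# Limb-like impedance (stiffness k, damping c) about the planned trajectory
# absorbs a brief unexpected push; without it, the error persists.
m, k, c = 1.0, 200.0, 20.0       # mass (kg), stiffness (N/m), damping (N*s/m)
dt, T, goal = 0.001, 1.5, 0.15   # time step (s), duration (s), reach distance (m)
t = np.arange(0.0, T, dt)
s = np.clip(t / 1.0, 0.0, 1.0)                      # normalized time of a 1 s reach
x_plan = goal * (10 * s**3 - 15 * s**4 + 6 * s**5)  # minimum-jerk position plan
v_plan = np.gradient(x_plan, dt)
a_plan = np.gradient(v_plan, dt)

def replay(with_impedance):
    x, v = 0.0, 0.0
    for i in range(len(t)):
        push = 20.0 if 0.40 <= t[i] < 0.45 else 0.0   # brief unexpected push (N)
        f = m * a_plan[i] + push                      # feedforward replay + perturbation
        if with_impedance:
            f += k * (x_plan[i] - x) + c * (v_plan[i] - v)
        v += (f / m) * dt
        x += v * dt
    return x

print("final error w/o impedance:", round(abs(replay(False) - goal), 3), "m")
print("final error with impedance:", round(abs(replay(True) - goal), 4), "m")
```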