While recent advances in motor learning research have emphasized the critical role of systematic task scheduling, heuristic design of task schedules remains predominant. Random task scheduling can lead to sub-optimal motor learning, whereas performance-based scheduling may be inadequate for complex motor skill acquisition. This paper addresses these challenges by proposing a model-based approach for online skill estimation and individualized task scheduling in de novo (novel) motor learning tasks. We introduce a framework that couples a personalized human motor learning model and a particle filter for skill-state estimation with a stochastic nonlinear model predictive control (SNMPC) strategy that optimizes curriculum design for a high-dimensional motor task. Simulation results show that the framework accurately estimates the latent skill state and accelerates skill learning. Furthermore, a human-subject study shows that the group trained with the SNMPC-based curriculum exhibited expedited skill learning and improved task performance. Our contributions offer a pathway toward expedited motor learning across various novel tasks, with implications for enhancing rehabilitation and skill acquisition processes.
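The skill-state estimator described in the abstract pairs a learning model with a particle filter. As a minimal sketch, assuming a hypothetical one-dimensional latent skill in [0, 1], a saturating learning curve driven by task difficulty, and Gaussian performance noise (none of these specifics are from the paper), a bootstrap particle filter could look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learning model: latent skill s grows toward 1 at a rate
# scaled by task difficulty d, with additive process noise.
def skill_transition(s, d, rate=0.05, noise=0.02):
    return np.clip(s + rate * d * (1.0 - s) + rng.normal(0, noise, s.shape), 0, 1)

# Hypothetical observation model: measured performance is skill plus noise.
def performance_likelihood(y, s, sigma=0.1):
    return np.exp(-0.5 * ((y - s) / sigma) ** 2)

def particle_filter_step(particles, weights, difficulty, y):
    particles = skill_transition(particles, difficulty)        # propagate
    weights = weights * performance_likelihood(y, particles)   # reweight
    weights /= weights.sum()
    # systematic resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles = rng.uniform(0, 1, 500)
weights = np.full(500, 1.0 / 500)
for y in [0.2, 0.3, 0.35, 0.5]:            # simulated performance scores
    particles, weights = particle_filter_step(particles, weights, difficulty=1.0, y=y)
estimate = float(np.dot(weights, particles))  # posterior-mean skill estimate
```

A curriculum controller such as the paper's SNMPC would then choose the next difficulty based on this posterior rather than on raw performance alone.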
                            Human-machine-human interaction in motor control and rehabilitation: a review
                        
                    
    
Abstract

Background: Human-human (HH) interaction mediated by machines (e.g., robots or passive sensorized devices), which we call human-machine-human (HMH) interaction, has been studied with increasing interest in the last decade. The use of machines allows the implementation of different forms of audiovisual and/or physical interaction in dyadic tasks. HMH interaction between two partners can improve the dyad’s ability to accomplish a joint motor task (task performance) beyond either partner’s ability to perform the task solo. It can also be used to more efficiently train an individual to improve their solo task performance (individual motor learning). We review recent research on the impact of HMH interaction on task performance and individual motor learning in the context of motor control and rehabilitation, and we propose future research directions in this area.

Methods: A systematic search was performed on the Scopus, IEEE Xplore, and PubMed databases. The search query was designed to find studies that involve HMH interaction in motor control and rehabilitation settings. Studies that do not investigate the effect of changing the interaction conditions were filtered out. Thirty-one studies met our inclusion criteria and were used in the qualitative synthesis.

Results: Studies are analyzed based on their results related to the effects of interaction type (e.g., audiovisual communication and/or physical interaction), interaction mode (collaborative, cooperative, co-active, and competitive), and partner characteristics. Visuo-physical interaction generally results in better dyadic task performance than visual interaction alone. In cases where the physical interaction between humans is described by a spring, there are conflicting results as to the effect of the stiffness of the spring. In terms of partner characteristics, having a more skilled partner improves dyadic task performance more than having a less skilled partner. However, conflicting results were observed in terms of individual motor learning.

Conclusions: Although it is difficult to draw clear conclusions as to which interaction type, mode, or partner characteristic may lead to optimal task performance or individual motor learning, these results show the possibility for improved outcomes through HMH interaction. Future work that focuses on selecting the optimal personalized interaction conditions and exploring their impact on rehabilitation settings may facilitate the transition of HMH training protocols to clinical implementations.
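Where the review discusses spring-mediated physical coupling, the basic idea can be illustrated with a toy simulation (our own construction, not a model from any reviewed study): two first-order trackers with different gains follow a moving target while a virtual spring couples their positions.

```python
import numpy as np

# Toy dyad model (illustrative assumptions throughout): each partner is a
# first-order tracker of a sinusoidal target; a virtual spring of stiffness k
# pulls each partner toward the other's position.
def simulate_dyad(k, steps=500, dt=0.01):
    rng = np.random.default_rng(3)
    t = np.arange(steps) * dt
    target = np.sin(2 * np.pi * 0.4 * t)
    x = np.zeros((2, steps))                 # partner positions over time
    gains = np.array([8.0, 3.0])             # more skilled vs. less skilled tracker
    for i in range(1, steps):
        err = target[i - 1] - x[:, i - 1]
        spring = k * (x[::-1, i - 1] - x[:, i - 1])   # force from the partner
        noise = rng.normal(0, 0.05, 2)
        x[:, i] = x[:, i - 1] + dt * (gains * err + spring + noise)
    return np.sqrt(np.mean((x - target) ** 2, axis=1))  # per-partner RMS error

rmse_uncoupled = simulate_dyad(k=0.0)
rmse_coupled = simulate_dyad(k=5.0)
```

In this toy setup the less-skilled tracker's error drops when coupled to the more skilled partner, echoing the reviewed finding that a more skilled partner tends to improve dyadic task performance.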
- Award ID(s): 2024488
- PAR ID: 10361188
- Publisher / Repository: Springer Science + Business Media
- Date Published:
- Journal Name: Journal of NeuroEngineering and Rehabilitation
- Volume: 18
- Issue: 1
- ISSN: 1743-0003
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Abstract: Action and perception are closely linked in many behaviors, necessitating a close coordination between sensory and motor neural processes so as to achieve a well-integrated, smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70–150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both “forward” (motor-to-sensory) and “inverse” directions between the higher-auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in “learning” to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.
- 
Objective: To define static, dynamic, and cognitive fit and their interactions as they pertain to exosystems and to document open research needs in using these fit characteristics to inform exosystem design.

Background: Initial exosystem sizing and fit evaluations are currently based on scalar anthropometric dimensions and subjective assessments. As fit depends on ongoing interactions related to task setting and user, attempts to tailor equipment have limitations when optimizing for this limited fit definition.

Method: A targeted literature review was conducted to inform a conceptual framework defining three characteristics of exosystem fit: static, dynamic, and cognitive. Details are provided on the importance of differentiating fit characteristics for developing exosystems.

Results: Static fit considers alignment between human and equipment and requires understanding anthropometric characteristics of target users and geometric equipment features. Dynamic fit assesses how the human and equipment move and interact with each other, with a focus on the relative alignment between the two systems. Cognitive fit considers the stages of human-information processing, including somatosensation, executive function, and motor selection. Human cognitive capabilities should remain available to process task- and stimulus-related information in the presence of an exosystem. Dynamic and cognitive fit are operationalized in a task-specific manner, while static fit can be considered for predefined postures.

Conclusion: A deeper understanding of how an exosystem fits an individual is needed to ensure good human–system performance. Development of methods for evaluating different fit characteristics is necessary.

Application: Methods are presented to inform exosystem evaluation across physical and cognitive characteristics.
- 
Principles from human-human physical interaction may be necessary to design more intuitive and seamless robotic devices to aid human movement. Previous studies have shown that light touch can aid balance and that haptic communication can improve performance of physical tasks, but the effects of touch between two humans on walking balance have not been previously characterized. This study examines physical interaction between two persons when one person aids another in performing a beam-walking task. Twelve pairs of healthy young adults held a force sensor with one hand while one person walked on a narrow balance beam (2 cm wide × 3.7 m long) and the other person walked overground by their side. We compare balance performance during partnered vs. solo beam-walking to examine the effects of haptic interaction, and we compare hand interaction mechanics during partnered beam-walking vs. overground walking to examine how the interaction aided balance. While holding the hand of a partner, participants were able to walk further on the beam without falling, reduce lateral sway, and decrease angular momentum in the frontal plane. We measured small hand force magnitudes (mean of 2.2 N laterally and 3.4 N vertically) that created opposing torque components about the beam axis and calculated the interaction torque, the overlapping opposing torque that does not contribute to motion of the beam-walker’s body. We found higher interaction torque magnitudes during partnered beam-walking vs. partnered overground walking, and correlation between interaction torque magnitude and reductions in lateral sway. To gain insight into feasible controller designs to emulate human-human physical interactions for aiding walking balance, we modeled the relationship between each torque component and motion of the beam-walker’s body as a mass-spring-damper system. Our model results show opposite types of mechanical elements (active vs. passive) for the two torque components. Our results demonstrate that hand interactions aid balance during partnered beam-walking by creating opposing torques that primarily serve haptic communication, and our model of the torques suggests control parameters for implementing human-human balance aid in human-robot interactions.
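The mass-spring-damper relationship the authors fit can be sketched as a linear regression of torque on angular position, velocity, and acceleration. The trajectory, parameter values, and noise level below are illustrative assumptions, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic body-sway trajectory (angle theta about the beam axis); two
# frequencies are used so position and acceleration are not collinear.
t = np.linspace(0, 10, 1000)
theta = 0.05 * np.sin(2 * np.pi * 0.5 * t) + 0.02 * np.sin(2 * np.pi * 1.3 * t)
dtheta = np.gradient(theta, t)
ddtheta = np.gradient(dtheta, t)

# Torque a mass-spring-damper with these (assumed) parameters would produce.
k_true, c_true, m_true = 40.0, 3.0, 1.5
torque = k_true * theta + c_true * dtheta + m_true * ddtheta
torque += rng.normal(0, 1e-3, t.shape)               # measurement noise

# Regress torque on [theta, dtheta, ddtheta] to recover stiffness k,
# damping c, and inertia m by linear least squares.
X = np.column_stack([theta, dtheta, ddtheta])
k_hat, c_hat, m_hat = np.linalg.lstsq(X, torque, rcond=None)[0]
```

The sign of the fitted stiffness and damping terms is what distinguishes passive (restoring) from active (assistive) mechanical elements in such a model.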
- 
Abstract: Human–exoskeleton interactions have the potential to bring about changes in human behavior for physical rehabilitation or skill augmentation. Despite significant advances in the design and control of these robots, their application to human training remains limited. The key obstacles to the design of such training paradigms are the prediction of human–exoskeleton interaction effects and the selection of interaction control to affect human behavior. In this article, we present a method to elucidate behavioral changes in the human–exoskeleton system and identify expert behaviors correlated with a task goal. Specifically, we observe the joint coordinations of the robot, also referred to as kinematic coordination behaviors, that emerge from human–exoskeleton interaction during learning. We demonstrate the use of kinematic coordination behaviors with two task domains through a set of three human-subject studies. We find that participants (1) learn novel tasks within the exoskeleton environment, (2) demonstrate similarity of coordination during successful movements within participants, (3) learn to leverage these coordination behaviors to maximize success within participants, and (4) tend to converge to similar coordinations for a given task strategy across participants. At a high level, we identify task-specific joint coordinations that are used by different experts for a given task goal. These coordinations can be quantified by observing experts, and the similarity to these coordinations can act as a measure of learning over the course of training for novices. The observed expert coordinations may further be used in the design of adaptive robot interactions aimed at teaching a participant the expert behaviors.
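One way to quantify "similarity to expert coordinations," in the spirit of this abstract, is to compare the dominant joint-covariation axes of two trials. The PCA-based measure and the synthetic joint-angle data below are our own illustrative construction, not the authors' metric:

```python
import numpy as np

rng = np.random.default_rng(2)

def coordination_vector(joint_angles):
    """First principal axis of joint-angle covariation: a compact summary of
    which joints move together during a movement (sign-normalized)."""
    centered = joint_angles - joint_angles.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    v = vt[0]
    return v if v[np.argmax(np.abs(v))] > 0 else -v

def coordination_similarity(a, b):
    """Cosine similarity between two coordination vectors (1 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two hypothetical trials (200 samples x 4 joints) sharing the same underlying
# joint coupling; the "novice" trial is noisier.
base = rng.normal(size=(200, 1)) @ np.array([[0.8, 0.5, 0.3, 0.1]])
trial_expert = base + 0.05 * rng.normal(size=(200, 4))
trial_novice = base[::-1] + 0.3 * rng.normal(size=(200, 4))

v_expert = coordination_vector(trial_expert)
v_novice = coordination_vector(trial_novice)
score = coordination_similarity(v_expert, v_novice)  # near 1 when coupling matches
```

Tracking such a score across training sessions would give a scalar learning curve of the kind the abstract describes for novices approaching expert coordinations.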