Soccer kicking is a complex whole-body motion that requires intricate coordination of various motor actions. To accomplish such dynamic motion in a humanoid robot, the robot needs to simultaneously: 1) transfer high kinetic energy to the kicking leg, 2) maintain balance and stability of the entire body, and 3) manage the impact disturbance from the ball during the kicking moment. Prior studies on robotic soccer kicking often prioritized stability, leading to overly conservative quasistatic motions. In this work, we present a biomechanics-inspired control framework that leverages trajectory optimization and imitation learning to facilitate highly dynamic soccer kicks in humanoid robots. We conducted an in-depth analysis of human soccer kick biomechanics to identify key motion constraints. Based on this understanding, we designed kinodynamically feasible trajectories that are then used as a reference in imitation learning to develop a robust feedback control policy. We demonstrate the effectiveness of our approach through a simulation of an anthropomorphic 25 DoF bipedal humanoid robot, named PresToe, which is equipped with 7 DoF legs, including a unique actuated toe. Using our framework, PresToe can execute dynamic instep kicks, propelling the ball at speeds exceeding 11 m/s in full dynamics simulation. 
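A common way to combine an optimized reference trajectory with imitation learning, as described above, is an exponentiated tracking reward. The sketch below shows the general shape of such a reward; the function name, weights, and scale are illustrative placeholders, not values from the paper.

```python
import math

def tracking_reward(q, q_ref, qd, qd_ref, w_pos=0.6, w_vel=0.4, scale=5.0):
    """Imitation-style reward: exponentiated negative squared error
    between the robot's joint state (q, qd) and the kinodynamically
    feasible reference (q_ref, qd_ref). Weights and scale are
    illustrative placeholders, not values from the paper."""
    pos_err = sum((a - b) ** 2 for a, b in zip(q, q_ref))
    vel_err = sum((a - b) ** 2 for a, b in zip(qd, qd_ref))
    return w_pos * math.exp(-scale * pos_err) + w_vel * math.exp(-scale * vel_err)
```

The reward is maximal (1.0 with these weights) when the policy tracks the reference exactly and decays smoothly with deviation, which keeps the learning signal dense throughout training.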
Deep reinforcement learning for modeling human locomotion control in neuromechanical simulation
Abstract: Modeling human motor control and predicting how humans will move in novel environments is a grand scientific challenge. Researchers in the fields of biomechanics and motor control have proposed and evaluated motor control models via neuromechanical simulations, which produce physically correct motions of a musculoskeletal model. Typically, researchers have developed control models that encode physiologically plausible motor control hypotheses and compared the resulting simulation behaviors to measurable human motion data. While such plausible control models were able to simulate and explain many basic locomotion behaviors (e.g. walking, running, and climbing stairs), modeling higher-layer control (e.g. processing environment cues, planning long-term motion strategies, and coordinating basic motor skills to navigate in dynamic and complex environments) remains a challenge. Recent advances in deep reinforcement learning lay a foundation for modeling these complex control processes and controlling a diverse repertoire of human movement; however, reinforcement learning has rarely been applied in neuromechanical simulation to model human control. In this paper, we review the current state of neuromechanical simulations, along with the fundamentals of reinforcement learning, as it applies to human locomotion. We also present a scientific competition and accompanying software platform, which we have organized to accelerate the use of reinforcement learning in neuromechanical simulations. This “Learn to Move” competition was an official competition at the NeurIPS conference from 2017 to 2019 and attracted over 1300 teams from around the world. Top teams adapted state-of-the-art deep reinforcement learning techniques and produced motions, such as quick turning and walk-to-stand transitions, that had not been demonstrated before in neuromechanical simulations without utilizing reference motion data. We close with a discussion of future opportunities at the intersection of human movement simulation and reinforcement learning, and of our plans to extend the Learn to Move competition to further facilitate interdisciplinary collaboration in modeling human motor control for biomechanics and rehabilitation research.
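The agent–environment loop that underlies all of the reinforcement learning methods discussed above can be illustrated with a toy stand-in for a locomotion environment. Everything below (environment, dynamics, reward) is invented for illustration; real neuromechanical platforms expose musculoskeletal states and muscle-excitation actions rather than a scalar position.

```python
import random

class Toy1DEnv:
    """Toy stand-in for a locomotion environment: the agent must reach a
    target position; the state is its position, the action a bounded
    velocity command. Purely illustrative."""
    def __init__(self, target=5.0):
        self.target = target
        self.pos = 0.0
        self.t = 0

    def reset(self):
        self.pos, self.t = 0.0, 0
        return self.pos

    def step(self, action):
        self.pos += max(-1.0, min(1.0, action))  # bounded actuation
        self.t += 1
        reward = -abs(self.target - self.pos)    # dense distance-to-goal reward
        return self.pos, reward, self.t >= 20    # fixed-horizon episode

def rollout(env, policy):
    """One episode of the standard agent-environment interaction loop."""
    s, total, done = env.reset(), 0.0, False
    while not done:
        s, r, done = env.step(policy(s))
        total += r
    return total

# A hand-written proportional policy far outperforms random actions,
# which is the gap a learned policy must close.
random.seed(0)
env = Toy1DEnv()
greedy_return = rollout(env, lambda s: env.target - s)
random_return = rollout(env, lambda s: random.uniform(-1.0, 1.0))
```

In a neuromechanical setting the same loop applies unchanged; only the state, action, and reward definitions grow to match the musculoskeletal model.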
- Award ID(s): 1734449
- PAR ID: 10298415
- Date Published:
- Journal Name: Journal of NeuroEngineering and Rehabilitation
- Volume: 18
- Issue: 1
- ISSN: 1743-0003
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- 
Baden, Tom (Ed.) Animals modulate sensory processing in concert with motor actions. Parallel copies of motor signals, called corollary discharge (CD), prepare the nervous system to process the mixture of externally and self-generated (reafferent) feedback that arises during locomotion. Commonly, CD in the peripheral nervous system cancels reafference to protect sensors and the central nervous system from being fatigued and overwhelmed by self-generated feedback. However, cancellation also limits the feedback that contributes to an animal’s awareness of its body position and motion within the environment, the sense of proprioception. We propose that, rather than cancellation, CD to the fish lateral line organ restructures reafference to maximize proprioceptive information content. Fishes’ undulatory body motions induce reafferent feedback that can encode the body’s instantaneous configuration with respect to fluid flows. We combined experimental and computational analyses of swimming biomechanics and hair cell physiology to develop a neuromechanical model of how fish can track peak body curvature, a key signature of axial undulatory locomotion. Without CD, this computation would be challenged by sensory adaptation, typified by decaying sensitivity and phase distortions with respect to an input stimulus. We find that CD interacts synergistically with sensor polarization to sharpen sensitivity along sensors’ preferred axes. The sharpening of sensitivity regulates spiking to a narrow interval coinciding with peak reafferent stimulation, which prevents adaptation and homogenizes the otherwise variable sensor output. Our integrative model reveals a vital role of CD for ensuring precise proprioceptive feedback during undulatory locomotion, which we term external proprioception.
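The sensor polarization this model exploits can be sketched as cosine tuning along a preferred axis, with an exponent standing in for the sensitivity sharpening described above. This parameterization is a common textbook simplification, not the paper's fitted model.

```python
import math

def polarized_response(flow_angle, preferred_angle, sharpness=1.0):
    """Directionally polarized sensor: the response follows the
    projection of the local flow direction onto the preferred axis,
    rectified because a firing rate cannot go negative. A larger
    `sharpness` exponent mimics sensitivity sharpening along the
    preferred axis (illustrative parameterization)."""
    proj = math.cos(flow_angle - preferred_angle)
    return max(0.0, proj) ** sharpness
```

Flow aligned with the preferred axis yields the maximal response, opposing flow yields none, and sharpening suppresses off-axis responses more strongly.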
- 
Abstract: Biological microswimmers can coordinate their motions to exploit their fluid environment—and each other—to achieve global advantages in their locomotory performance. Such cooperative locomotion requires delicate adjustment of both the individual swimming gaits and the spatial arrangement of the swimmers. Here we probe the emergence of such cooperative behaviors among artificial microswimmers endowed with artificial intelligence. We present the first use of a deep reinforcement learning approach to empower the cooperative locomotion of a pair of reconfigurable microswimmers. The AI-advised cooperative policy comprises two stages: an approach stage where the swimmers get in close proximity to fully exploit hydrodynamic interactions, followed by a synchronization stage where the swimmers synchronize their locomotory gaits to maximize their overall net propulsion. The synchronized motions allow the swimmer pair to move together coherently with an enhanced locomotion performance unattainable by a single swimmer alone. Our work constitutes a first step toward uncovering intriguing cooperative behaviors of smart artificial microswimmers, demonstrating the vast potential of reinforcement learning toward intelligent autonomous manipulation of multiple microswimmers for their future biomedical and environmental applications.
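The two-stage structure of the AI-advised policy can be written down schematically. The learned policy is a neural network; the rule-based sketch below, with made-up gains and field names, only illustrates the approach-then-synchronize decomposition.

```python
def cooperative_action(distance, phase_self, phase_partner,
                       approach_radius=1.0, k=0.5):
    """Hedged sketch of a two-stage cooperative policy: while the
    swimmers are far apart, command motion toward the partner; once
    within hydrodynamic-interaction range, nudge the gait phase toward
    the partner's to synchronize. Gains and the radius are invented
    for illustration."""
    if distance > approach_radius:
        # Stage 1: close the gap to exploit hydrodynamic interactions.
        return {"stage": "approach", "speed_cmd": k * distance, "phase_cmd": 0.0}
    # Stage 2: synchronize locomotory gaits for net propulsion.
    return {"stage": "synchronize", "speed_cmd": 0.0,
            "phase_cmd": k * (phase_partner - phase_self)}
```

The switch on `distance` is the essential point: different objectives dominate before and after the swimmers are close enough to interact hydrodynamically.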
- 
Sensory feedback during movement entails sensing a mix of externally- and self-generated stimuli (respectively, exafference and reafference). In many peripheral sensory systems, a parallel copy of the motor command, a corollary discharge, is thought to eliminate sensory feedback during behaviors. However, reafference has important roles in motor control, because it provides real-time feedback on the animal’s motions through the environment. In this case, the corollary discharge must be calibrated to enable feedback while avoiding negative consequences like sensor fatigue. The undulatory motions of fishes’ bodies generate induced flows that are sensed by the lateral line sensory organ, and prior work has shown these reafferent signals contribute to the regulation of swimming kinematics. Corollary discharge to the lateral line reduces the gain for reafference, but cannot eliminate it altogether. We develop a data-driven model integrating swimming biomechanics, hair cell physiology, and corollary discharge to understand how sensory modulation is calibrated during locomotion in larval zebrafish. In the absence of corollary discharge, lateral line afferent units exhibit the highly heterogeneous habituation rates characteristic of hair cell systems, typified by decaying sensitivity and phase distortions with respect to an input stimulus. Activation of the corollary discharge prevents habituation, reduces response heterogeneity, and regulates response phases in a narrow interval around the time of the peak stimulus. This suggests a synergistic interaction between the corollary discharge and the polarization of lateral line sensors, which sharpens sensitivity along their preferred axes. Our integrative model reveals a vital role of corollary discharge for ensuring precise feedback, including proprioception, during undulatory locomotion.
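The gain-reduction role of the corollary discharge can be caricatured with a resource-depletion model of habituation. The depletion rule and all constants below are invented for illustration; the paper's actual model is data-driven.

```python
def afferent_response(stim, cd_gain, deplete=0.05, recover=0.02):
    """Toy habituation model: a sensor 'resource' is depleted by drive
    and slowly recovers. With full reafferent drive (cd_gain = 1) the
    resource collapses and the response habituates toward zero; a
    corollary discharge that reduces the gain (cd_gain < 1) preserves
    sensitivity. All constants are illustrative, not fitted values."""
    resource, trace = 1.0, []
    for s in stim:
        drive = cd_gain * abs(s)
        resource += -deplete * drive + recover * (1.0 - resource)
        resource = max(0.0, min(1.0, resource))
        trace.append(resource * abs(s))  # response scales with remaining resource
    return trace
```

Driving the model with a constant reafferent stimulus shows the effect: without gain reduction the response decays to nothing, while with it the sensor settles at a sustained, informative output.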
- 
Language-guided human motion synthesis has been a challenging task due to the inherent complexity and diversity of human behaviors. Previous methods face limitations in generalization to novel actions, often resulting in unrealistic or incoherent motion sequences. In this paper, we propose ATOM (ATomic mOtion Modeling) to mitigate this problem, by decomposing actions into atomic actions, and employing a curriculum learning strategy to learn atomic action composition. First, we disentangle complex human motions into a set of atomic actions during learning, and then assemble novel actions using the learned atomic actions, which offers better adaptability to new actions. Moreover, we introduce a curriculum learning training strategy that leverages masked motion modeling with a gradual increase in the mask ratio, and thus facilitates atomic action assembly. This approach mitigates the overfitting problem commonly encountered in previous methods while encouraging the model to learn better motion representations. We demonstrate the effectiveness of ATOM through extensive experiments, including text-to-motion and action-to-motion synthesis tasks. We further illustrate its superiority in synthesizing plausible and coherent text-guided human motion sequences.
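The gradual mask-ratio curriculum described in the last abstract can be sketched as a simple linear schedule feeding a frame masker. The actual schedule and the start/end ratios used by ATOM are not specified here; all values below are placeholders.

```python
import random

def mask_ratio_schedule(step, total_steps, start=0.15, end=0.75):
    """Linearly anneal the mask ratio from `start` to `end` over
    training, matching the 'gradual increase in the mask ratio' idea.
    The endpoints are illustrative placeholders."""
    frac = min(1.0, max(0.0, step / total_steps))
    return start + frac * (end - start)

def mask_frames(seq_len, ratio, rng):
    """Choose which motion frames to mask for masked motion modeling."""
    k = max(1, round(seq_len * ratio))
    return sorted(rng.sample(range(seq_len), k))
```

Early in training the model reconstructs motion with most frames visible; as the ratio grows it must compose longer stretches from learned atomic actions, which is the curriculum's point.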