Title: Learning Optimized Human Motion via Phase Space Analysis
This paper proposes a dynamic-system-based learning-from-demonstration approach to teach a robot activities of daily living. The approach takes inspiration from the human movement literature to formulate trajectory learning as an optimal control problem. We assume that a weighted combination of basis objective functions is the true objective function for a demonstrated motion. We derive basis objective functions analogous to those in the human movement literature to optimize the robot's motion. This method aims to adapt the learned motion naturally to different situations. To validate our approach, we learn motions from two categories: 1) commonly prescribed therapeutic exercises and 2) tea making. We show the reproduction accuracy of our method and compare torque requirements to the dynamic movement primitive for each motion, with and without an added load.
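The assumed cost structure, a weighted combination of basis objectives evaluated on a demonstrated trajectory, can be sketched as follows. This is a minimal illustration: the particular basis costs (effort and smoothness), the sampling, and the function names are our own assumptions, not the paper's exact formulation.

```python
import numpy as np

def basis_costs(traj, dt=0.01):
    # traj: (T, d) array of joint positions sampled every dt seconds.
    vel = np.gradient(traj, dt, axis=0)
    acc = np.gradient(vel, dt, axis=0)
    jerk = np.gradient(acc, dt, axis=0)
    return np.array([
        np.sum(acc ** 2) * dt,   # effort-like basis objective
        np.sum(jerk ** 2) * dt,  # smoothness (minimum-jerk-like) basis objective
    ])

def total_cost(traj, weights, dt=0.01):
    # The assumed true objective: a weighted combination of basis objectives.
    return float(weights @ basis_costs(traj, dt))
```

Under this assumption, recovering the weights that best explain a demonstration reduces to fitting `weights` so that the demonstrated trajectory scores lower than perturbed alternatives.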
Award ID(s):
1830597
PAR ID:
10227429
Date Published:
Journal Name:
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems
ISSN:
2153-0858
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Learning a robot motor skill from scratch is impractically slow; so much so that, in practice, learning must typically be bootstrapped using human demonstration. However, relying on human demonstration necessarily degrades the autonomy of robots that must learn a wide variety of skills over their operational lifetimes. We propose using kinematic motion planning as a completely autonomous, sample-efficient way to bootstrap motor skill learning for object manipulation. We demonstrate the use of motion planners to bootstrap motor skills in two complex object manipulation scenarios with different policy representations: opening a drawer with a dynamic movement primitive representation, and closing a microwave door with a deep neural network policy. We also show how our method can bootstrap a motor skill for the challenging dynamic task of learning to hit a ball off a tee, where a kinematic plan based on treating the scene as static is insufficient to solve the task, but sufficient to bootstrap a more dynamic policy. In all three cases, our method is competitive with human-demonstrated initialization, and significantly outperforms starting with a random policy. This approach enables robots to learn motor policies for dynamic tasks efficiently and autonomously, without human demonstration.
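The drawer-opening case above, where a kinematic plan rather than a human demonstration initializes a dynamic movement primitive, could be sketched roughly as below. The 1-D restriction, all constants, and the least-squares fit are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_dmp_forcing(plan, dt, n_basis=20, alpha=25.0, beta=6.25, alpha_x=3.0):
    # Fit the forcing term of a 1-D discrete DMP to a kinematically
    # planned trajectory `plan` (positions sampled every `dt` seconds).
    y = np.asarray(plan, dtype=float)
    yd = np.gradient(y, dt)
    ydd = np.gradient(yd, dt)
    g, tau = y[-1], len(y) * dt          # goal and movement duration
    s = np.linspace(0.0, 1.0, len(y))    # normalized time
    x = np.exp(-alpha_x * s)             # canonical-system phase
    # Target forcing term from the transformation-system equation
    #   tau^2 * ydd = alpha * (beta * (g - y) - tau * yd) + f(x)
    f_target = tau ** 2 * ydd - alpha * (beta * (g - y) - tau * yd)
    centers = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
    widths = n_basis ** 1.5 / centers
    psi = np.exp(-widths[None, :] * (x[:, None] - centers[None, :]) ** 2)
    # Phase-gated Gaussian features; solve for basis weights by least squares.
    features = psi * x[:, None]
    w = np.linalg.lstsq(features, f_target, rcond=None)[0]
    return w, centers, widths
```

The same fitting step works whether `plan` came from a human demonstration or a motion planner, which is what makes planner-based bootstrapping a drop-in substitute here.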
  2. Disentangling the sources of visual motion in a dynamic scene during self-movement or ego motion is important for autonomous navigation and tracking. In the dynamic image segments of a video frame containing independently moving objects, optic flow relative to the next frame is the sum of the motion fields generated due to camera and object motion. The traditional ego-motion estimation methods assume the scene to be static, and the recent deep learning-based methods do not separate pixel velocities into object- and ego-motion components. We propose a learning-based approach to predict both ego-motion parameters and object-motion field (OMF) from image sequences using a convolutional autoencoder while being robust to variations due to the unconstrained scene depth. This is achieved by: 1) training with continuous ego-motion constraints that allow solving for ego-motion parameters independently of depth and 2) learning a sparsely activated overcomplete ego-motion field (EMF) basis set, which eliminates the irrelevant components in both static and dynamic segments for the task of ego-motion estimation. In order to learn the EMF basis set, we propose a new differentiable sparsity penalty function that approximates the number of nonzero activations in the bottleneck layer of the autoencoder and enforces sparsity more effectively than L1- and L2-norm-based penalties. Unlike the existing direct ego-motion estimation methods, the predicted global EMF can be used to extract OMF directly by comparing it against the optic flow. Compared with the state-of-the-art baselines, the proposed model performs favorably on pixelwise object- and ego-motion estimation tasks when evaluated on real and synthetic data sets of dynamic scenes. 
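The differentiable sparsity penalty described above approximates a count of nonzero activations in the bottleneck layer. One common smooth L0 surrogate, an illustrative stand-in since the paper's exact function is not reproduced here, looks like this:

```python
import numpy as np

def smooth_l0_penalty(activations, sigma=0.1):
    # Differentiable surrogate for counting nonzero activations:
    # each unit contributes ~0 near zero and ~1 once |a| >> sigma,
    # unlike L1/L2 penalties, which keep growing with magnitude.
    a = np.asarray(activations, dtype=float)
    return float(np.sum(1.0 - np.exp(-(a / sigma) ** 2)))
```

Because the contribution saturates at 1 per unit, large activations are not penalized further, which is the property that lets a sparse, overcomplete basis keep a few strongly active components.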
  3. We present an end-to-end method for capturing the dynamics of 3D human characters and translating them for synthesizing new, visually-realistic motion sequences. Conventional methods employ sophisticated, but generic, control approaches for driving the joints of articulated characters, paying little attention to the distinct dynamics of human joint movements. In contrast, our approach attempts to synthesize human-like joint movements by exploiting a biologically-plausible, compact network of spiking neurons that drive joint control in primates and rodents. We adapt the controller architecture by introducing learnable components and propose an evolutionary algorithm for training the spiking neural network architectures and capturing diverse joint dynamics. Our method requires only a few samples for capturing the dynamic properties of a joint's motion and exploits the biologically-inspired, trained controller for its reconstruction. More importantly, it can transfer the captured dynamics to new visually-plausible motion sequences. To enable user-dependent tailoring of the resulting motion sequences, we develop an interactive framework that allows for editing and real-time visualization of the controlled 3D character. We also demonstrate the applicability of our method to real human motion capture data by learning the hand joint dynamics from a gesture dataset and using our framework to reconstruct the gestures with our 3D animated character. The compact architecture of our joint controller emerging from its biologically-realistic design, and the inherent capacity of our evolutionary learning algorithm for parallelization, suggest that our approach could provide an efficient and scalable alternative for synthesizing 3D character animations with diverse and visually-realistic motion dynamics. 
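The spiking units driving joint control could be approximated, for intuition, by a standard leaky integrate-and-fire model. This is a generic textbook sketch, not the paper's controller, and all constants are illustrative.

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=0.02, v_thresh=1.0, v_reset=0.0):
    # Leaky integrate-and-fire neuron: membrane voltage leaks toward zero
    # with time constant tau, integrates input current, and emits a spike
    # (then resets) whenever it crosses v_thresh.
    v, spikes = 0.0, []
    for i_t in current:
        v += dt * (-v / tau + i_t)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes
```

A compact network of such units can encode a joint's dynamics in its spike timing, which is the kind of representation the evolutionary training described above searches over.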
  4. Our goal is to understand and optimize human concept learning by predicting the ease of learning of a particular exemplar or category. We propose a method for estimating ease values, quantitative measures of ease of learning, as an alternative to conducting costly empirical training studies. Our method combines a psychological embedding of domain exemplars with a pragmatic categorization model. The two components are integrated using a radial basis function network (RBFN) that predicts ease values. The free parameters of the RBFN are fit using human similarity judgments, circumventing the need to collect human training data to fit more complex models of human categorization. We conduct two category-training experiments to validate predictions of the RBFN. We demonstrate that an instance-based RBFN outperforms both a prototype-based RBFN and an empirical approach using the raw data. Although the human data were collected across diverse experimental conditions, the predicted ease values strongly correlate with human learning performance. Training can be sequenced by (predicted) ease, achieving what is known as fading in the psychology literature and curriculum learning in the machine-learning literature, both of which have been shown to facilitate learning.
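An instance-based RBFN of the kind described, with one Gaussian basis per exemplar in the psychological embedding and a weighted sum predicting an ease value, can be sketched as follows. Names and the parameterization are illustrative assumptions.

```python
import numpy as np

def rbfn_predict(x, centers, widths, weights, bias=0.0):
    # Instance-based RBFN: `centers` holds one embedding vector per
    # exemplar; each contributes a Gaussian activation, and the weighted
    # sum (plus bias) is the predicted ease-of-learning value.
    d2 = np.sum((centers - x) ** 2, axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return float(weights @ phi + bias)
```

The free parameters (`widths`, `weights`, `bias`) would be the ones fit from similarity judgments; a prototype-based variant would simply replace per-exemplar centers with per-category means.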
  5. Coral reef ecosystems support significant biological activity and harbor huge diversity, but they are facing a severe crisis driven by anthropogenic activities and climate change. An important behavioral trait of the coral holobiont is coral motion, which may play an essential role in feeding, competition, and reproduction, and thus in survival and fitness. Therefore, characterizing coral behavior through motion analysis will aid our understanding of basic biological and physical coral functions. However, tissue motion in the stony scleractinian corals that contribute most to coral reef construction is subtle and may be imperceptible to both the human eye and commonly used imaging techniques. Here we propose and apply a systematic approach to quantify and visualize subtle coral motion across a series of light and dark cycles in the scleractinian coral Montipora capricornis. We use digital image correlation and optical flow techniques to quantify and characterize minute coral motions under different light conditions. In addition, as a visualization tool, a motion magnification algorithm magnifies coral motions at different frequencies, explicitly displaying the distinctive dynamic modes of coral movement. Specifically, our assessments of displacement, strain, optical flow, and mode shape quantify coral motion under different light conditions, and they all show that M. capricornis exhibits more active motion at night than during the day. Our approach provides unprecedented insight into micro-scale coral movement and behavior through macro-scale digital imaging, thus offering a useful empirical toolset for the coral research community.
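Digital image correlation, at its simplest, estimates a local displacement by maximizing correlation between a reference image and the current frame over candidate shifts. A toy integer-shift version (not the authors' pipeline, which handles subpixel motion) might read:

```python
import numpy as np

def dic_shift(ref, cur, max_shift=3):
    # Minimal digital-image-correlation sketch: search integer (dy, dx)
    # shifts and return the one that maximizes the correlation between
    # the reference image and the shifted current frame (circular wrap).
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(cur, -dy, axis=0), -dx, axis=1)
            score = float(np.sum(ref * shifted))
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift
```

Running this per image subset over a day/night time series yields a displacement field, from which strain and mode shapes like those reported above can be derived.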