Humans can learn to manipulate new objects by simply watching others; providing robots with the ability to learn from such demonstrations would enable a natural interface for specifying new behaviors. This work develops Robot See Robot Do (RSRD), a method for imitating articulated object manipulation from a single monocular RGB human demonstration, given a single static multi-view object scan. We first propose 4D Differentiable Part Models (4D-DPM), a method for recovering 3D part motion from a monocular video with differentiable rendering. This analysis-by-synthesis approach uses part-centric feature fields in an iterative optimization that enables the use of geometric regularizers to recover 3D motions from only a single video. Given this 4D reconstruction, the robot replicates object trajectories by planning bimanual arm motions that induce the demonstrated object part motion. By representing demonstrations as part-centric trajectories, RSRD focuses on replicating the demonstration's intended behavior while considering the robot's own morphological limits, rather than attempting to reproduce the hand's motion. We evaluate 4D-DPM's 3D tracking accuracy on ground-truth-annotated 3D part trajectories and RSRD's physical execution performance on 9 objects across 10 trials each on a bimanual YuMi robot. Each phase of RSRD achieves an average success rate of 87%, for a total end-to-end success rate of 60% across 90 trials. Notably, this is accomplished using only feature fields distilled from large pretrained vision models, without any task-specific training, fine-tuning, dataset collection, or annotation.
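The core of 4D-DPM is an analysis-by-synthesis loop: render the part-centric feature fields under candidate part poses, compare against the video's per-frame feature maps, and descend the gradient. Below is a minimal PyTorch sketch of that loop, not the authors' code; `render_features`, `part_fields`, and the feature video are assumed inputs, and the temporal penalty stands in for the geometric regularizers the abstract mentions.

```python
import torch

def track_part_poses(video_features, part_fields, render_features,
                     steps=50, lr=1e-2, smooth_weight=1e-3):
    """Per-frame analysis-by-synthesis: optimize each part's 6-DoF pose so the
    rendered part feature field matches the observed frame features."""
    poses = []
    # One 6-DoF pose per part: translation (x, y, z) + axis-angle rotation.
    pose = torch.zeros(len(part_fields), 6)
    for frame_feats in video_features:
        prev = pose.detach()
        pose = prev.clone().requires_grad_(True)  # warm-start from previous frame
        opt = torch.optim.Adam([pose], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            rendered = render_features(part_fields, pose)    # differentiable render
            loss = (rendered - frame_feats).square().mean()  # feature-matching loss
            loss = loss + smooth_weight * (pose - prev).square().mean()  # temporal regularizer
            loss.backward()
            opt.step()
        poses.append(pose.detach())
    return torch.stack(poses)  # (num_frames, num_parts, 6)
```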
Object Motion Guided Human Motion Synthesis
            Modeling human behaviors in contextual environments has a wide range of applications in character animation, embodied AI, VR/AR, and robotics. In real-world scenarios, humans frequently interact with the environment and manipulate various objects to complete daily tasks. In this work, we study the problem of full-body human motion synthesis for the manipulation of large-sized objects. We propose Object MOtion guided human MOtion synthesis (OMOMO), a conditional diffusion framework that can generate full-body manipulation behaviors from only the object motion. Since naively applying diffusion models fails to precisely enforce contact constraints between the hands and the object, OMOMO learns two separate denoising processes to first predict hand positions from object motion and subsequently synthesize full-body poses based on the predicted hand positions. By employing the hand positions as an intermediate representation between the two denoising processes, we can explicitly enforce contact constraints, resulting in more physically plausible manipulation motions. With the learned model, we develop a novel system that captures full-body human manipulation motions by simply attaching a smartphone to the object being manipulated. Through extensive experiments, we demonstrate the effectiveness of our proposed pipeline and its ability to generalize to unseen objects. Additionally, as high-quality human-object interaction datasets are scarce, we collect a large-scale dataset consisting of 3D object geometry, object motion, and human motion. Our dataset contains human-object interaction motion for 15 objects, with a total duration of approximately 10 hours. 
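The two-stage sampling described above can be sketched as follows, assuming two pretrained denoisers (`hand_denoiser`, `body_denoiser`) that each perform one reverse-diffusion step; the contact projection here is a toy centroid-based stand-in for the paper's actual surface constraint, and all shapes are illustrative.

```python
import torch

@torch.no_grad()
def synthesize_manipulation(obj_motion, hand_denoiser, body_denoiser, timesteps):
    """Stage 1 denoises hand positions conditioned on object motion; stage 2
    denoises full-body poses conditioned on the contact-corrected hands."""
    T = obj_motion.shape[0]
    hands = torch.randn(T, 2, 3)                  # two wrists, xyz per frame
    for t in reversed(timesteps):                 # reverse diffusion, stage 1
        hands = hand_denoiser(hands, obj_motion, t)
    hands = enforce_contact(hands, obj_motion)    # explicit contact constraint
    body = torch.randn(T, 24, 3)                  # full-body joints (SMPL-like)
    for t in reversed(timesteps):                 # reverse diffusion, stage 2
        body = body_denoiser(body, hands, t)
    return hands, body

def enforce_contact(hands, obj_motion, max_dist=0.05):
    """Toy contact projection: pull each hand toward the object centroid if it
    drifts beyond max_dist (a stand-in for a real surface projection)."""
    center = obj_motion[:, None, :3]              # (T, 1, 3) object position
    offset = hands - center
    dist = offset.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    return center + offset * (dist.clamp(max=max_dist) / dist)
```

Using hand positions as the interface between the two samplers is what makes the contact correction possible: the constraint is applied to an explicit intermediate quantity rather than buried inside a single end-to-end denoiser.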
- Award ID(s): 2120095
- PAR ID: 10521377
- Publisher / Repository: ACM
- Date Published:
- Journal Name: ACM Transactions on Graphics
- Volume: 42
- Issue: 6
- ISSN: 0730-0301
- Page Range / eLocation ID: 1 to 11
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
We consider the problem of in-hand dexterous manipulation with a focus on unknown or uncertain hand–object parameters, such as hand configuration, object pose within hand, and contact positions. In particular, in this work we formulate a generic framework for hand–object configuration estimation using underactuated hands as an example. Owing to the passive reconfigurability and the lack of encoders in the hand's joints, it is challenging to estimate, plan, and actively control underactuated manipulation. By modeling the grasp constraints, we present a particle filter-based framework to estimate the hand configuration. Specifically, given an arbitrary grasp, we start by sampling a set of hand configuration hypotheses and then randomly manipulate the object within the hand. While observing the object's movements as evidence using an external camera, which is not necessarily calibrated with the hand frame, our estimator calculates the likelihood of each hypothesis to iteratively estimate the hand configuration. Once converged, the estimator is used to track the hand configuration in real time for future manipulations. Thereafter, we develop an algorithm to precisely plan and control the underactuated manipulation to move the grasped object to desired poses. In contrast to most other dexterous manipulation approaches, our framework does not require any tactile sensing or joint encoders, and can directly operate on any novel objects, without requiring a model of the object a priori. We implemented our framework on both the Yale Model O hand and the Yale T42 hand. The results show that the estimation is accurate for different objects, and that the framework can be easily adapted across different underactuated hand models. Finally, we evaluated our planning and control algorithm on handwriting tasks, and demonstrated the effectiveness of the proposed framework.
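The estimator described above amounts to a particle-filter update over hand-configuration hypotheses. A generic sketch of one cycle follows; the grasp-constraint observation model `likelihood` is an assumed callable, and the resample-and-jitter scheme is a common default rather than the paper's exact formulation.

```python
import numpy as np

def pf_update(particles, weights, observed_motion, likelihood, jitter=0.01, rng=None):
    """One particle-filter cycle: reweight hand-configuration hypotheses (N, D)
    by how well they explain the observed object motion, then resample and diffuse."""
    if rng is None:
        rng = np.random.default_rng()
    w = weights * np.array([likelihood(p, observed_motion) for p in particles])
    w = w / w.sum()                                        # normalize posterior weights
    idx = rng.choice(len(particles), size=len(particles), p=w)
    new_particles = particles[idx] + jitter * rng.standard_normal(particles.shape)
    return new_particles, np.full(len(particles), 1.0 / len(particles))
```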
Knowing the object grabbed by a hand can offer essential contextual information for interaction between the human and the physical world. This paper presents a novel system, ViObject, for passive object recognition that uses accelerometer and gyroscope sensor data from commodity smartwatches to identify untagged everyday objects. The system relies on the vibrations caused by grabbing objects and does not require additional hardware or human effort. ViObject's ability to recognize objects passively can have important implications for a wide range of applications, from smart home automation to healthcare and assistive technologies. In this paper, we present the design and implementation of ViObject, addressing challenges such as motion interference, different object-touching positions, varying grasp speeds and pressures, and model customization for new users and new objects. We evaluate the system's performance using a dataset of 20 objects from 20 participants and show that ViObject achieves an average accuracy of 86.4%. We also customize models for new users and new objects, achieving an average accuracy of 90.1%. Overall, ViObject demonstrates a novel concept of passive object recognition using commodity smartwatches and opens up new avenues for research and innovation in this area.
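As a rough illustration of the vibration-classification idea (not ViObject's actual pipeline), one could featurize a fixed-length accelerometer window around each grab event and train an off-the-shelf classifier; the window length and feature choices below are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def vibration_features(window):
    """Time- and frequency-domain features over one fixed-length grab-event
    accelerometer window of shape (N, 3)."""
    mag = np.linalg.norm(window, axis=1)              # acceleration magnitude
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))  # vibration spectrum
    return np.concatenate([[mag.mean(), mag.std(), mag.max()], spectrum[:16]])

# Fit on labeled grab events, then predict the grabbed object:
# clf = RandomForestClassifier().fit(
#     [vibration_features(w) for w in train_windows], train_labels)
# obj = clf.predict([vibration_features(test_window)])
```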
The recent ubiquity of high-framerate (120 fps and higher) handheld cameras creates the opportunity to study human grasping at a greater level of detail than normal-speed cameras allow. We first collected 91 slow-motion interactions with objects in a convenience store setting. We then annotated the actions through the lenses of various existing manipulation taxonomies. We found that manipulation, particularly the process of forming a grasp, is complicated and proceeds quickly. Our dataset shows that there are many ways that people deal with clutter in order to form a strong grasp of an object. It also reveals several errors and how people recover from them. Though annotating motions in detail is time-consuming, the annotation systems we used still leave out important aspects of understanding manipulation actions, such as how the environment functions as a "finger" of sorts, how different parts of the hand can be involved in different grasping tasks, and high-level intent.
Constraining contacts to remain fixed on an object during manipulation limits the potential workspace size, as motion is subject to the hand's kinematic topology. Finger gaiting is one way to alleviate such restraints. It allows contacts to be freely broken and remade so as to operate on different manipulation manifolds. This capability, however, has traditionally been difficult or impossible to practically realize: a finger gaiting system must simultaneously plan for and control forces on the object while maintaining stability during contact switching. This letter alleviates these difficulties by taking advantage of system compliance, allowing the hand to more easily switch contacts while maintaining a stable grasp. Our method achieves complete SO(3) finger gaiting control of grasped objects against gravity by developing a manipulation planner that operates via orthogonal safe modes of a compliant, underactuated hand without tactile sensors or joint encoders. During manipulation, a low-latency 6D pose object tracker provides feedback via vision, allowing the planner to update its plan online so as to adaptively recover from trajectory deviations. The efficacy of this method is showcased by manipulating both convex and non-convex objects on a real robot. Its robustness is evaluated via perturbation rejection and long trajectory goals. To the best of the authors' knowledge, this is the first work that has autonomously achieved full SO(3) control of objects in-hand via finger gaiting and without a support surface, marking a valuable step towards realizing true robot in-hand manipulation capabilities.
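The vision-in-the-loop replanning described above can be pictured as a simple control loop. This is a schematic sketch only: `planner`, `tracker`, and `hand` are hypothetical interfaces standing in for the letter's actual planner, 6D pose tracker, and compliant underactuated hand.

```python
def gaiting_loop(planner, tracker, hand, goal_pose, tol=0.05):
    """Execute planned contact switches and rolls toward goal_pose,
    replanning online whenever the tracked object pose deviates."""
    plan = planner.plan(tracker.pose(), goal_pose)
    while tracker.error(goal_pose) > tol:
        action = plan.next_action(tracker.pose())  # safe-mode move or contact switch
        hand.execute(action)                       # compliance keeps the grasp stable
        if plan.deviation(tracker.pose()) > plan.threshold:
            plan = planner.plan(tracker.pose(), goal_pose)  # adaptive online replan
    return tracker.pose()
```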