Understanding how we grasp objects with our hands has important applications in areas like robotics and mixed reality. However, this challenging problem requires accurate modeling of the contact between hands and objects. To capture grasps, existing methods use skeletons, meshes, or parametric models that can cause misalignments, resulting in inaccurate contacts. We present MANUS, a method for Markerless Hand-Object Grasp Capture using Articulated 3D Gaussians. We build a novel articulated 3D Gaussian representation that extends 3D Gaussian splatting to high-fidelity modeling of articulating hands. Because our representation uses Gaussian primitives, it enables us to efficiently and accurately estimate contacts between the hand and the object. For the most accurate results, our method requires tens of camera views that current datasets do not provide. We therefore build the MANUS Grasps dataset, which contains hand-object grasps viewed from 53 cameras across 30+ scenes and 3 subjects, comprising over 7M frames. In addition to extensive qualitative results, we show that our method outperforms others on a quantitative contact evaluation that uses paint transfer from the object to the hand.
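Since contact estimation here reduces to geometric queries between two sets of Gaussian primitives, a minimal sketch is easy to give: score each hand Gaussian by whether its center lies within a small distance of any object Gaussian. This is an illustrative approximation, not the paper's implementation; the function name, the opacity weighting, and the 4 mm threshold are all assumptions.

```python
import numpy as np

def estimate_contact(hand_means, object_means, hand_opacities, threshold=0.004):
    """Illustrative contact score between two sets of 3D Gaussians.

    hand_means:     (N, 3) centers of the hand Gaussians, in meters
    object_means:   (M, 3) centers of the object Gaussians, in meters
    hand_opacities: (N,)   per-Gaussian opacities in [0, 1]
    threshold:      distance below which a Gaussian counts as "in contact"
    """
    # Distance from every hand Gaussian to its nearest object Gaussian.
    diffs = hand_means[:, None, :] - object_means[None, :, :]  # (N, M, 3)
    nearest = np.linalg.norm(diffs, axis=-1).min(axis=1)       # (N,)
    # Weight by opacity so nearly transparent primitives contribute little.
    return hand_opacities * (nearest < threshold).astype(float)

# Toy usage with random point sets.
scores = estimate_contact(np.random.rand(100, 3), np.random.rand(500, 3),
                          np.ones(100))
```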
Using Geometric Features to Represent Near-Contact Behavior in Robotic Grasping
In this paper we define two feature representations for grasping. These representations capture hand-object geometric relationships at the near-contact stage, before the fingers close around the object. Their benefits are: 1) they are stable under noise in both joint and pose variation; 2) they are largely hand- and object-agnostic, enabling direct comparison across different hand morphologies; 3) their format makes them suitable for direct application of machine learning techniques developed for images. We validate the representations by: 1) demonstrating that they can accurately predict the distribution of ε-metric values generated by kinematic noise, i.e., they capture much of the information inherent in contact points and force vectors without the corresponding instabilities; and 2) training a binary grasp-success classifier on a real-world dataset of 588 grasps.
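Because the representations are image-like, off-the-shelf classifiers apply directly. Below is a minimal sketch of the second validation step (a binary grasp-success classifier); the 32x32 feature size and the randomly generated stand-in data are placeholders, not the paper's features or dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data: 588 grasps, each a 32x32 near-contact feature "image"
# (random here; the real features come from the paper's extraction step).
rng = np.random.default_rng(0)
X = rng.normal(size=(588, 32, 32)).reshape(588, -1)  # flatten for a linear model
y = rng.integers(0, 2, size=588)                     # 1 = grasp succeeded

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```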
- Award ID(s): 1730126
- PAR ID: 10130017
- Date Published:
- Journal Name: International Conference on Robotics and Applications
- Page Range / eLocation ID: 272 to 277
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
- We tackle the novel problem of predicting 3D hand motion and contact maps (or Interaction Trajectories) given a single RGB view, action text, and a 3D contact point on the object as input. Our approach consists of (1) an Interaction Codebook: a VQVAE model that learns a latent codebook of hand poses and contact points, effectively tokenizing interaction trajectories; and (2) an Interaction Predictor: a transformer-decoder module that predicts the interaction trajectory from test-time inputs, using an indexer module to retrieve a latent affordance from the learned codebook. To train our model, we develop a data engine that extracts 3D hand poses and contact trajectories from the diverse HoloAssist dataset. We evaluate our model on a benchmark that is 2.5-10X larger than existing works in terms of the diversity of objects and interactions observed, and we test for generalization across object categories, action categories, tasks, and scenes. Experimental results show the effectiveness of our approach over transformer and diffusion baselines across all settings.
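The indexer's retrieval step amounts to a nearest-neighbor lookup in the learned VQVAE codebook. A hedged sketch of that lookup, with the codebook size and latent dimension invented for illustration:

```python
import torch

def retrieve_affordance(query, codebook):
    """Nearest-neighbor lookup of a latent affordance token.

    query:    (D,)   encoder output for the test-time inputs
    codebook: (K, D) learned VQVAE embedding table
    """
    # Squared Euclidean distance from the query to every code vector.
    d = ((codebook - query[None, :]) ** 2).sum(dim=1)  # (K,)
    idx = int(torch.argmin(d))
    return idx, codebook[idx]

# Invented sizes: a codebook of 512 codes, each 64-dimensional.
idx, code = retrieve_affordance(torch.randn(64), torch.randn(512, 64))
```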
- As technology advances, the need for safe, efficient, and collaborative human-robot teams has become increasingly important. One of the most fundamental collaborative tasks in any setting is the object handover. Human-to-robot handovers can take one of two approaches: (1) direct hand-to-hand, or (2) indirect hand-to-placement-to-pick-up. The latter approach ensures minimal contact between the human and the robot but can also result in increased idle time, since the robot must wait for the object to first be placed on a surface. To minimize such idle time, the robot must preemptively predict where the human intends to place the object. Furthermore, for the robot to act preemptively in any productive manner, prediction and motion planning must occur in real time. We introduce a novel prediction-planning pipeline that allows the robot to preemptively move toward the human agent's intended placement location, using gaze and gestures as model inputs. In this paper, we investigate the performance and drawbacks of our early intent predictor-planner, as well as the practical benefits of using such a pipeline, through a human-robot case study.
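One simple way to turn gaze into a placement prediction, as such a pipeline requires, is to intersect the gaze ray with the table plane and hand the resulting point to the motion planner. This is only a sketch under that assumption; the paper's predictor also uses gestures, and all names below are hypothetical.

```python
import numpy as np

def gaze_to_placement(eye_pos, gaze_dir, table_height=0.0):
    """Predict a placement point by intersecting the gaze ray with the table.

    eye_pos:  (3,) gaze origin in the world frame
    gaze_dir: (3,) unit gaze direction
    Returns the intersection with the plane z = table_height, or None.
    """
    if abs(gaze_dir[2]) < 1e-6:  # gaze parallel to the table: no intersection
        return None
    t = (table_height - eye_pos[2]) / gaze_dir[2]
    if t <= 0:                   # intersection behind the viewer
        return None
    return eye_pos + t * gaze_dir

# The planner would re-target toward each new prediction as gaze updates.
d = np.array([0.3, 0.1, -0.95])
target = gaze_to_placement(np.array([0.0, 0.0, 1.5]), d / np.linalg.norm(d))
```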
- This paper explores the problem of autonomous in-hand regrasping: moving from an initial grasp on an object to a desired grasp using the dexterity of a robot's fingers. We propose a planner for this problem that alternates between finger gaiting and in-grasp manipulation. Finger gaiting enables the robot to move a single finger to a new contact location on the object while the remaining fingers stably hold the object. In-grasp manipulation moves the object to a new pose relative to the robot's palm while maintaining the contact locations between the hand and the object. Given the object's geometry (as a mesh), the hand's kinematic structure, and the initial and desired grasps, we plan a sequence of finger gaits and object-reposing actions that reaches the desired grasp without dropping the object. We propose an optimization-based approach and report in-hand regrasping plans for 5 objects with 5 in-hand regrasp goals each. The plans generated by our planner are collision-free and guaranteed to be kinematically feasible.
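The planner's alternation can be illustrated as a loop that interleaves the two primitives until the desired grasp is reached. The toy version below swaps the paper's optimization subproblems for trivial stand-ins, so only the control flow is faithful:

```python
def plan_regrasp(grasp, goal, max_steps=20):
    """Alternate finger gaiting and in-grasp manipulation (toy version).

    grasp/goal: {"contacts": {finger: contact_id}, "pose": object_pose_id}.
    Each branch stands in for an optimization subproblem in the real planner.
    """
    plan = []
    for _ in range(max_steps):
        if grasp == goal:
            return plan
        mismatched = [f for f, c in grasp["contacts"].items()
                      if c != goal["contacts"][f]]
        if mismatched:
            # Finger gait: relocate one finger while the others hold the object.
            f = mismatched[0]
            grasp["contacts"][f] = goal["contacts"][f]
            plan.append(("gait", f))
        else:
            # In-grasp manipulation: repose the object, contacts fixed.
            grasp["pose"] = goal["pose"]
            plan.append(("repose", goal["pose"]))
    return None  # budget exhausted: no feasible plan found

plan = plan_regrasp(
    {"contacts": {"thumb": 0, "index": 1}, "pose": 0},
    {"contacts": {"thumb": 2, "index": 1}, "pose": 3})
# -> [("gait", "thumb"), ("repose", 3)]
```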
- Many models of intuitive physical reasoning posit some kind of mental simulation mechanism, yet everyday environments frequently contain far more objects than people could plausibly represent with their limited cognitive capacity. What determines which objects are actually included in our representations? We asked participants to predict how a ball would bounce through a complex field of obstacles and probed working memory for objects in the scene that were more and less likely to be relevant to the ball's trajectory. We evaluate different accounts of relevance and find that successful object memory is best predicted by how frequently the ball's trajectory is expected to contact that object under a probabilistic simulation model. This suggests that people construct representations for mental simulation efficiently and dynamically, on the fly, adding objects "just in time": only when they are expected to become relevant for the next stage of simulation.
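The winning relevance account, expected contact frequency under a probabilistic simulation model, is straightforward to operationalize: run many noisy rollouts of the ball and count how often each obstacle is touched. A toy 2D sketch with invented dynamics and noise parameters (no bounces, unlike the paper's stimuli):

```python
import numpy as np

def contact_frequency(start, velocity, obstacles, n_sims=1000, steps=100,
                      noise=0.05, radius=0.5, seed=0):
    """Fraction of noisy rollouts in which the ball touches each obstacle.

    start, velocity: (2,) initial position and velocity of the ball
    obstacles:       (K, 2) obstacle centers, treated as circles
    Returns a (K,) array of expected-contact frequencies.
    """
    rng = np.random.default_rng(seed)
    hits = np.zeros(len(obstacles))
    for _ in range(n_sims):
        pos, vel = start.astype(float).copy(), velocity.astype(float).copy()
        touched = np.zeros(len(obstacles), dtype=bool)
        for _ in range(steps):
            vel += rng.normal(scale=noise, size=2)  # trajectory uncertainty
            pos += 0.1 * vel
            touched |= np.linalg.norm(obstacles - pos, axis=1) < radius
        hits += touched
    return hits / n_sims

# The obstacle near the ball's path scores high; the far one scores low.
freq = contact_frequency(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                         np.array([[3.0, 0.2], [3.0, 4.0]]))
```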

