
Title: Improving Grasp Classification through Spatial Metrics Available from Sensors
We present a method for classifying the quality of near-contact grasps using spatial metrics that are recoverable from sensor data. Current methods often rely on precise contact points, which are difficult to obtain reliably in practice, or on tactile sensors or image data, which may be unavailable for some applications. Our method, in contrast, uses a mix of spatial metrics that do not depend on the fingers being in contact with the object, such as the object's approximate size and location. The grasp quality can therefore be calculated *before* the fingers actually contact the object, enabling near-grasp quality prediction. Using a random forest classifier, the resulting system predicts grasp quality with 96% accuracy from spatial metrics based on the locations of the robot palm, fingers, and object. Furthermore, it maintains an accuracy of 90% when exposed to 10% noise across all its inputs.
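The classification pipeline described above can be sketched with a random forest over hand-crafted spatial features. Everything below is illustrative: the five feature columns (palm-object distance, object size, fingertip-to-object distances), the toy labeling rule, and the synthetic data are assumptions for demonstration, not the paper's actual feature set or data.

```python
# Hypothetical sketch: near-contact grasp-quality classification with a
# random forest over spatial metrics, plus a 10%-input-noise robustness
# check in the spirit of the abstract. Features and labels are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Illustrative spatial metrics (all normalized to [0, 1]):
# col 0: palm-to-object distance, col 1: object size,
# cols 2-4: fingertip-to-object-center distances for three fingers.
X = rng.uniform(0.0, 1.0, size=(n, 5))
# Toy labeling rule: a grasp is "good" when the palm is close and the
# fingertips are, on average, near the object.
y = ((X[:, 0] < 0.5) & (X[:, 1:].mean(axis=1) < 0.6)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Robustness check: perturb every input by ~10% relative noise
# and re-evaluate on the same held-out labels.
X_noisy = X_te * (1.0 + 0.1 * rng.standard_normal(X_te.shape))
acc_noisy = clf.score(X_noisy, y_te)
print(f"clean accuracy: {acc:.2f}, noisy accuracy: {acc_noisy:.2f}")
```

Because the features are low-dimensional and the decision rule is a conjunction of thresholds, a random forest separates the classes well here; the same structure (threshold-like decisions on geometric quantities) is what makes forests a natural fit for spatial grasp metrics.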
Journal Name:
IEEE International Conference on Robotics and Automation
Sponsoring Org:
National Science Foundation
More Like this
  1. In this paper we define two feature representations for grasping. These representations capture hand-object geometric relationships at the near-contact stage, before the fingers close around the object. Their benefits are: 1) they are stable under noise in both joint and pose variation; 2) they are largely hand- and object-agnostic, enabling direct comparison across different hand morphologies; 3) their format makes them suitable for direct application of machine learning techniques developed for images. We validate the representations by: 1) demonstrating that they can accurately predict the distribution of ε-metric values generated by kinematic noise, i.e., they capture much of the information inherent in contact points and force vectors without the corresponding instabilities; and 2) training a binary grasp-success classifier on a real-world data set consisting of 588 grasps.
  2. This paper explores the problem of autonomous, in-hand regrasping: moving from an initial grasp on an object to a desired grasp using the dexterity of a robot's fingers. We propose a planner for this problem which alternates between finger gaiting and in-grasp manipulation. Finger gaiting enables the robot to move a single finger to a new contact location on the object, while the remaining fingers stably hold the object. In-grasp manipulation moves the object to a new pose relative to the robot's palm, while maintaining the contact locations between the hand and object. Given the object's geometry (as a mesh), the hand's kinematic structure, and the initial and desired grasps, we plan a sequence of finger gaits and object-reposing actions to reach the desired grasp without dropping the object. We propose an optimization-based approach and report in-hand regrasping plans for 5 objects over 5 in-hand regrasp goals each. The plans generated by our planner are collision free and guarantee kinematic feasibility.
  3. In this work, we discuss the design of soft robotic fingers for robust precision grasping. Through a conceptual analysis of the finger shape and compliance during grasping, we confirm that antipodal grasps are more stable when contact with the object occurs on the side of the fingers (i.e., pinch grasps) instead of the fingertips. In addition, we show that achieving such pinch grasps with soft fingers for a wide variety of objects requires at least two independent bending segments each, but only requires actuation in the proximal segment. Using a physical prototype hand, we evaluate the improvement in pinch-grasping performance of this two-segment proximally actuated finger design compared to more typical, uniformly actuated fingers. Through an exploration of the relative lengths of the two finger segments, we show the tradeoff between power grasping strength and precision grasping capabilities for fingers with passive distal segments. We characterize grasping on the basis of the acquisition region, object sizes, rotational stability, and robustness to external forces. Based on these metrics, we confirm that higher-quality precision grasping is achieved through pinch grasping via fingers with the proximally actuated finger design compared to uniformly actuated fingers. However, power grasping is still best performed with uniformly actuated fingers. Accordingly, soft continuum fingers should be designed to have at least two independently actuated serial segments, since such fingers can maximize grasping performance during both power and precision grasps through controlled adaptation between uniform and proximally actuated finger structures.
  4. The 3D shape of a robot's end-effector plays a critical role in determining its functionality and overall performance. Many of today's industrial applications rely on highly customized gripper design for a given task to ensure the system's robustness and accuracy. However, the process of manual hardware design is both costly and time-consuming, and the quality of the design is also dependent on the engineer's experience and domain expertise, which can easily be outdated or inaccurate. The goal of this paper is to use machine learning algorithms to automate this design process and generate task-specific gripper designs that satisfy a set of pre-defined design objectives. We model the design objectives by training a Fitness network to predict their values for a pair of gripper fingers and a grasp object. This Fitness network is then used to provide training supervision to a 3D Generative network that produces a pair of 3D finger geometries for the target grasp object. Our experiments demonstrate that the proposed 3D generative design framework generates parallel-jaw gripper finger shapes that achieve more stable and robust grasps as compared to other general-purpose and task-specific gripper design algorithms.
  5. Vacuum-based end effectors are widely used in industry and are often preferred over parallel-jaw and multifinger grippers due to their ability to lift objects with a single point of contact. Suction grasp planners often target planar surfaces on point clouds near the estimated centroid of an object. In this paper, we propose a compliant suction contact model that computes the quality of the seal between the suction cup and local target surface and a measure of the ability of the suction grasp to resist an external gravity wrench. To characterize grasps, we estimate robustness to perturbations in end-effector and object pose, material properties, and external wrenches. We analyze grasps across 1,500 3D object models to generate Dex-Net 3.0, a dataset of 2.8 million point clouds, suction grasps, and grasp robustness labels. We use Dex-Net 3.0 to train a Grasp Quality Convolutional Neural Network (GQ-CNN) to classify robust suction targets in point clouds containing a single object. We evaluate the resulting system in 350 physical trials on an ABB YuMi fitted with a pneumatic suction gripper. When evaluated on novel objects that we categorize as Basic (prismatic or cylindrical), Typical (more complex geometry), and Adversarial (with few available suction-grasp points), Dex-Net 3.0 achieves success rates of 98%, 82%, and 58% respectively, improving to 81% in the latter case when the training set includes only adversarial objects. Code, datasets, and supplemental material can be found at