A Deep Learning-Based Autonomous Robot Manipulator for Sorting Application
Robot manipulation and grasping mechanisms have received considerable attention in recent years, leading to the development of a wide range of industrial applications. This paper proposes the development of an autonomous robotic grasping system for an object-sorting application. The robot uses RGB-D data to perform object detection, pose estimation, trajectory generation, and object sorting. The proposed approach can also grasp specific objects selected by users. Trained convolutional neural networks perform object detection and determine the point cloud cluster corresponding to the object to be grasped. From the selected point cloud data, a grasp generator algorithm outputs potential grasps. A grasp filter then scores these candidates, and the highest-scoring grasp is executed on a real robot. A motion planner generates collision-free trajectories to execute the chosen grasp. Experiments on an AUBO robotic manipulator demonstrate the potential of the proposed approach for autonomous object sorting with robust and fast sorting performance.
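The pipeline described above (point cloud cluster in, candidate grasps generated, candidates scored, best grasp selected) can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: the candidate generator, the approach direction, and the centroid-based score are all made-up stand-ins for the grasp generator and grasp filter mentioned in the abstract.

```python
# Hedged sketch of the abstract's pipeline: point cloud cluster ->
# candidate grasps -> scoring filter -> highest-scoring grasp.
# All names and heuristics here are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Tuple

Point = Tuple[float, float, float]

@dataclass
class Grasp:
    position: Point   # gripper center in the camera frame
    approach: Point   # approach direction (here: fixed top-down)

def generate_candidates(cluster: List[Point]) -> List[Grasp]:
    """Toy grasp generator: one top-down candidate per cluster point."""
    return [Grasp(position=p, approach=(0.0, 0.0, -1.0)) for p in cluster]

def score_grasp(g: Grasp, cluster: List[Point]) -> float:
    """Toy grasp filter: prefer grasps near the cluster centroid."""
    n = len(cluster)
    cx = sum(p[0] for p in cluster) / n
    cy = sum(p[1] for p in cluster) / n
    cz = sum(p[2] for p in cluster) / n
    d = ((g.position[0] - cx) ** 2
         + (g.position[1] - cy) ** 2
         + (g.position[2] - cz) ** 2) ** 0.5
    return 1.0 / (1.0 + d)

def select_best_grasp(cluster: List[Point]) -> Grasp:
    candidates = generate_candidates(cluster)
    return max(candidates, key=lambda g: score_grasp(g, cluster))

# Tiny segmented cluster (meters, camera frame).
cluster = [(0.0, 0.0, 0.5), (0.02, 0.0, 0.5), (0.01, 0.01, 0.5)]
best = select_best_grasp(cluster)
print(best.position)
```

In the real system the selected grasp would then be handed to a motion planner for collision-free trajectory generation, which this sketch omits.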
- Publication Date:
- NSF-PAR ID:
- 10282536
- Journal Name:
- 2020 Fourth IEEE International Conference on Robotic Computing
- Page Range or eLocation-ID:
- 298-305
- Sponsoring Org:
- National Science Foundation
More Like this
- There has been significant recent work on data-driven algorithms for learning general-purpose grasping policies. However, these policies can consistently fail to grasp challenging objects which are significantly out of the distribution of objects in the training data or which have very few high-quality grasps. Motivated by such objects, we propose a novel problem setting, Exploratory Grasping, for efficiently discovering reliable grasps on an unknown polyhedral object via sequential grasping, releasing, and toppling. We formalize Exploratory Grasping as a Markov Decision Process where we assume that the robot can (1) distinguish stable poses of a polyhedral object of …
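The trial-and-error discovery described above can be illustrated with a simpler stand-in: treating the candidate grasps on one stable pose as a multi-armed bandit and using UCB1 to balance exploring untested grasps against repeating promising ones. This is a loud simplification of the MDP formulation the abstract describes; the class name, the UCB1 strategy, and the simulated success rates are all assumptions.

```python
# Illustrative sketch: grasp discovery on a fixed set of candidates,
# framed as a UCB1 bandit (a simplification of the Exploratory Grasping
# MDP described above; all names and numbers are made up).
import math
import random

class GraspExplorer:
    def __init__(self, n_grasps: int):
        self.successes = [0] * n_grasps
        self.trials = [0] * n_grasps

    def choose(self) -> int:
        """UCB1: try each grasp once, then trade off mean success
        against an exploration bonus for rarely tried grasps."""
        for i, t in enumerate(self.trials):
            if t == 0:
                return i
        total = sum(self.trials)
        return max(
            range(len(self.trials)),
            key=lambda i: self.successes[i] / self.trials[i]
            + math.sqrt(2 * math.log(total) / self.trials[i]),
        )

    def update(self, i: int, success: bool) -> None:
        self.trials[i] += 1
        self.successes[i] += int(success)

# Simulated object with one reliable grasp (index 2) among five candidates.
random.seed(0)
true_success = [0.1, 0.2, 0.9, 0.15, 0.3]
explorer = GraspExplorer(len(true_success))
for _ in range(200):
    g = explorer.choose()
    explorer.update(g, random.random() < true_success[g])
best = max(range(len(true_success)),
           key=lambda i: explorer.successes[i] / max(explorer.trials[i], 1))
print(best)
```

After 200 simulated attempts the explorer concentrates its trials on the reliable grasp, which is the behavior the exploration framing is after.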
- Robotic grasping is successful when a robot can sense and grasp an object without letting it slip. Beyond industrial robotic tasks, there are two main robotic grasping methods. The first is planning-based grasping, where the object geometry is known beforehand and stable grasps are calculated using algorithms [1]. The second uses tactile feedback. Currently, there are capacitive sensors placed beneath stiff pads on the front of robotic fingers [2]. With post-execution grasp adjustment procedures to estimate grasp stability, a support vector machine classifier can distinguish stable and unstable grasps. The accuracy across the classes of tested objects is 81% [1]. …
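The stability classification idea above can be sketched with a minimal linear classifier. The abstract mentions a support vector machine; here a perceptron stands in, since both learn a linear decision boundary, and a dependency-free sketch keeps the example self-contained. The tactile features (contact pressure, slip signal) and the toy data are invented for illustration.

```python
# Minimal sketch of grasp-stability classification from tactile features.
# A perceptron stands in for the SVM mentioned in the text; features,
# data, and hyperparameters are illustrative assumptions.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """samples: feature tuples (contact pressure, slip signal),
    labels: +1 for a stable grasp, -1 for an unstable one."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # mistake-driven update
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def classify(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Toy tactile readings: high pressure + low slip -> stable (+1).
X = [(0.9, 0.1), (0.8, 0.2), (0.2, 0.9), (0.1, 0.8)]
y = [1, 1, -1, -1]
w, b = train_perceptron(X, y)
print([classify(w, b, x) for x in X])
```

Because the toy data is linearly separable, the perceptron converges and classifies all four samples correctly; a real tactile dataset would of course be noisier, which is where the SVM's margin maximization helps.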
- We propose an approach to multi-modal grasp detection that jointly predicts the probabilities that several types of grasps succeed at a given grasp pose. Given a partial point cloud of a scene, the algorithm proposes a set of feasible grasp candidates, then estimates the probabilities that a grasp of each type would succeed at each candidate pose. Predicting grasp success probabilities directly from point clouds makes our approach agnostic to the number and placement of depth sensors at execution time. We evaluate our system both in simulation and on a real robot with a Robotiq 3-Finger Adaptive Gripper and compare …
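The selection step described above (score every candidate pose under every grasp type, then execute the best pair) can be sketched as follows. The heuristic predictor is a made-up stand-in for the learned network, and the grasp type names are illustrative, not taken from the paper.

```python
# Sketch of multi-modal grasp selection: estimate a success probability
# per (pose, grasp type) pair and pick the maximum. The width-based
# heuristic below is an assumed stand-in for a learned predictor.
GRASP_TYPES = ("fingertip", "power", "suction")

def predict_success(pose, grasp_type):
    """Toy predictor: narrow objects favor fingertip grasps,
    wide objects favor power grasps; suction is width-independent."""
    width = pose["object_width"]  # meters
    if grasp_type == "fingertip":
        return max(0.0, 1.0 - width / 0.08)
    if grasp_type == "power":
        return min(1.0, width / 0.12)
    return 0.5

def best_grasp(candidate_poses):
    """Return (probability, pose id, grasp type) with the highest score."""
    scored = [(predict_success(p, t), p["id"], t)
              for p in candidate_poses for t in GRASP_TYPES]
    return max(scored)

poses = [{"id": 0, "object_width": 0.02},
         {"id": 1, "object_width": 0.10}]
print(best_grasp(poses))
```

Scoring all grasp types jointly, rather than committing to one gripper mode up front, is what lets this kind of system pick a suction grasp where fingers would fail and vice versa.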
- Grasping in dynamic environments presents a unique set of challenges. A stable and reachable grasp can become unreachable and unstable as the target object moves; motion planning needs to be adaptive and run in real time; and the delay in computation makes prediction necessary. In this paper, we present a dynamic grasping framework that is reachability-aware and motion-aware. Specifically, we model the reachability space of the robot using a signed distance field, which enables us to quickly screen unreachable grasps. Also, we train a neural network to predict the grasp quality conditioned on the current motion of the target. Using these as …
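The signed-distance-field screening step above can be sketched with a deliberately crude reachability model: a sphere approximating the arm's reach. A real system would precompute the field from the robot's kinematics; the radius, margin, and sphere shape here are loud simplifications for illustration.

```python
# Sketch of reachability-aware grasp screening via a signed distance
# field (SDF). A sphere stands in for the true reachable workspace;
# the radius and margin below are assumed values.
import math

REACH_RADIUS = 0.8  # assumed maximum reach of the arm, in meters

def reach_sdf(p):
    """Signed distance to the reachable region's boundary:
    negative inside (reachable), positive outside."""
    return math.dist(p, (0.0, 0.0, 0.0)) - REACH_RADIUS

def screen_grasps(grasp_positions, margin=0.05):
    """Keep only grasps at least `margin` meters inside the reachable
    set, so slight target motion does not immediately invalidate them."""
    return [p for p in grasp_positions if reach_sdf(p) <= -margin]

grasps = [(0.3, 0.2, 0.4), (0.9, 0.0, 0.0), (0.5, 0.5, 0.5)]
print(screen_grasps(grasps))
```

The appeal of the SDF representation is that each candidate is screened with a single distance lookup, cheap enough to rerun every control cycle as the target moves.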