

Search for: All records

Award ID contains: 1826258

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full-text articles may not be available free of charge during the embargo (administrative interval).

Some links on this page may take you to non-federal websites. Their policies may differ from those of this site.

  1. Free, publicly accessible full text available May 19, 2024
  2. Robotic grasping typically follows five stages: object detection, object localisation, object pose estimation, grasp pose estimation, and grasp planning. We focus on object pose estimation. Our approach relies on three pieces of information: multiple views of the object, the camera’s extrinsic parameters at those viewpoints, and 3D CAD models of the objects. The first step uses a standard deep learning backbone (FCN ResNet) to estimate the object label, a semantic segmentation, and a coarse estimate of the object pose with respect to the camera. Our novelty is a refinement module that starts from the coarse pose estimate and refines it by optimisation through differentiable rendering. This is a purely vision-based approach that avoids the need for other information such as point clouds or depth images. We evaluate our object pose estimation approach on the ShapeNet dataset and show improvements over the state of the art. We also show that the estimated object pose yields 99.65% grasp accuracy with the ground-truth grasp candidates on the Object Clutter Indoor Dataset (OCID) Grasp dataset, as computed using standard practice.
  3. Practical robot applications that require grasping often fail because of failed grasp planning, and a good grasp quality measure is key to successful grasp planning. In this paper, we developed a novel grasp quality measure that quantifies and evaluates grasp quality in real time. To quantify grasp quality, we compute a set of object movement features by analyzing the interaction between the gripper and the object’s projections in the image space. The normalizations and weights of the features are tuned to produce practical and intuitive grasp quality predictions. To evaluate our grasp quality measure, we conducted a real-robot grasping experiment with 1000 grasp trials on ten household objects to examine the relationship between our grasp scores and the actual robot grasping results. The results show that the average grasp success rate increases, and the average amount of undesired object movement decreases, as the calculated grasp score increases, which validates our quality measure. We achieved a 100% grasp success rate over 100 grasps of the ten objects when using our grasp quality measure to plan top-quality grasps.
  4. null (Ed.)
  5. Persons with disabilities often rely on caregivers or family members to assist in their daily living activities. Robotic assistants can provide an alternative solution if intuitive user interfaces are designed for simple operations. Current human-robot interfaces are still far from operating intuitively when used for complex activities of daily living (ADL). In this era of smartphones packed with sensors, such as accelerometers, gyroscopes, and a precise touch screen, robot controls can be interfaced with smartphones to capture the user’s intended operation of the robot assistant. In this paper, we review current popular human-robot interfaces, and we present three novel smartphone-based interfaces for operating a robotic arm to assist persons with disabilities in their ADL tasks. Useful smartphone data, including three-dimensional orientation and two-dimensional touchscreen positions, are used as control variables for the robot motion in Cartesian teleoperation. We present the three control interfaces, their implementation on a smartphone to control a robotic arm, and a comparison of results from using the three interfaces for three different ADL tasks. The developed interfaces provide intuitiveness, low cost, and environmental adaptability.
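The refinement idea in the pose-estimation abstract (item 2) — start from a coarse pose and descend a rendering loss — can be sketched in miniature. This is a toy stand-in, not the paper's method: the "renderer" below merely transforms 2D points rather than rendering a CAD model, and finite differences substitute for the analytic gradients a true differentiable renderer would supply.

```python
import numpy as np

def render_points(model_pts, pose):
    """Toy 'renderer': rotate 2D model points by theta and translate.
    Stands in for a differentiable renderer producing image-space output."""
    theta, tx, ty = pose
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return model_pts @ R.T + np.array([tx, ty])

def refine_pose(model_pts, observed_pts, coarse_pose, lr=0.2, iters=400):
    """Gradient-descent refinement from a coarse pose estimate; central
    finite differences replace autodiff through the renderer."""
    pose = np.asarray(coarse_pose, dtype=float)
    eps = 1e-5

    def loss(p):
        return np.mean((render_points(model_pts, p) - observed_pts) ** 2)

    for _ in range(iters):
        grad = np.array([
            (loss(pose + eps * np.eye(3)[i]) - loss(pose - eps * np.eye(3)[i]))
            / (2 * eps)
            for i in range(3)
        ])
        pose -= lr * grad
    return pose

# Synthetic example: recover a known pose from a coarse initialization.
model = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
true_pose = (0.4, 0.5, -0.2)
observed = render_points(model, true_pose)
refined = refine_pose(model, observed, coarse_pose=(0.1, 0.0, 0.0))
```

In the actual pipeline, the loss would compare rendered views of the object's CAD model against the observed segmentations across the multiple viewpoints, with gradients flowing through the renderer itself.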
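The structure of the grasp quality measure in item 3 — normalized object-movement features combined by tuned weights into a score — might be sketched as follows. The feature names, normalization scales, and weights here are invented placeholders for illustration; the paper tunes its own feature set empirically.

```python
# Hypothetical features and weights -- placeholders, not the paper's values.
FEATURE_WEIGHTS = {
    "translation_px": 0.5,    # predicted object translation in image space
    "rotation_deg": 0.3,      # predicted in-plane rotation of the object
    "contact_asymmetry": 0.2, # imbalance of gripper contact about the centroid
}
FEATURE_SCALES = {
    "translation_px": 50.0,
    "rotation_deg": 45.0,
    "contact_asymmetry": 1.0,
}

def grasp_quality(features):
    """Map movement features to a score in [0, 1]: 1 means no predicted
    undesired movement, 0 means large movement. Each feature is normalized
    to [0, 1], then subtracted as a weighted penalty."""
    score = 1.0
    for name, weight in FEATURE_WEIGHTS.items():
        normalized = min(abs(features[name]) / FEATURE_SCALES[name], 1.0)
        score -= weight * normalized
    return max(score, 0.0)

good = grasp_quality({"translation_px": 2.0, "rotation_deg": 1.0,
                      "contact_asymmetry": 0.05})
bad = grasp_quality({"translation_px": 40.0, "rotation_deg": 30.0,
                     "contact_asymmetry": 0.8})
```

A planner would compute this score for each grasp candidate in real time and execute the highest-scoring one, which is how the measure supports the reported grasp-planning experiment.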
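For item 5, here is a minimal sketch of how a phone's orientation could be mapped to Cartesian velocity commands in teleoperation. The deadband, tilt range, and speed limit are illustrative assumptions, not the calibrated values of the paper's interfaces.

```python
import math

def orientation_to_velocity(roll, pitch, max_speed=0.05,
                            deadband=math.radians(5)):
    """Map phone tilt (radians) to an x/y end-effector velocity (m/s).
    A deadband rejects hand tremor; beyond it, the command scales
    linearly with tilt and saturates at max_speed at 30 degrees."""
    def axis(angle):
        if abs(angle) < deadband:
            return 0.0
        span = math.radians(30) - deadband
        fraction = min((abs(angle) - deadband) / span, 1.0)
        return math.copysign(fraction, angle) * max_speed

    # Convention (assumed): tilt forward -> +x, tilt right -> +y.
    return axis(pitch), axis(roll)

# Slight forward tilt falls inside the deadband; a 15-degree roll
# commands a lateral velocity.
vx, vy = orientation_to_velocity(roll=math.radians(15),
                                 pitch=math.radians(2))
```

The touchscreen channel would be handled analogously, with 2D touch displacement mapped to motion in a second Cartesian plane, so the two sensor streams together cover the arm's translational workspace.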