Title: Tactile Behaviors with the Vision-Based Tactile Sensor FingerVision
This paper introduces FingerVision, a vision-based tactile sensor, and explores its usefulness in tactile behaviors. FingerVision consists of a transparent elastic skin marked with dots and a camera; it is easy to fabricate, low cost, and physically robust. Unlike other vision-based tactile sensors, the complete transparency of the FingerVision skin provides multimodal sensation. The modalities sensed by FingerVision include distributions of force and slip, as well as object information such as distance, location, pose, size, shape, and texture. Slip detection is highly sensitive because it is obtained by applying computer vision directly to the output of the FingerVision camera. It provides high-resolution slip detection that does not depend on the contact force; i.e., it can sense slip of a lightweight object that generates negligible contact force. The tactile behaviors explored in this paper include manipulations that exploit this feature. For example, we demonstrate that grasp adaptation with FingerVision can grasp origami, as well as other deformable and fragile objects such as vegetables, fruits, and raw eggs.
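The abstract above contains no code; as a rough illustration of the slip-detection idea (computer vision applied directly to the camera's view of the marker dots), here is a minimal Python/OpenCV sketch. It is not the authors' implementation: the blob-detector settings, camera index, and slip threshold are all assumed values.

    import cv2
    import numpy as np

    def detect_markers(gray):
        # Detect the dark dots printed on the transparent skin as blob centers.
        params = cv2.SimpleBlobDetector_Params()
        params.filterByArea = True
        params.minArea = 10.0
        detector = cv2.SimpleBlobDetector_create(params)
        return np.array([kp.pt for kp in detector.detect(gray)],
                        dtype=np.float32).reshape(-1, 1, 2)

    cap = cv2.VideoCapture(0)      # camera behind the elastic skin (assumed index)
    ok, frame = cap.read()
    prev_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    pts = detect_markers(prev_gray)  # assumes markers are visible at start-up
    SLIP_PX = 0.5                  # mean dot motion in px/frame; tune per sensor

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Track every dot from the previous frame with Lucas-Kanade optical flow.
        new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.ravel() == 1
        motion = np.linalg.norm((new_pts - pts).reshape(-1, 2)[good], axis=1)
        # Tangential dot motion signals slip even when the normal force is negligible.
        if motion.size and motion.mean() > SLIP_PX:
            print("slip detected")
        prev_gray, pts = gray, new_pts

In the real sensor, per-marker displacement also yields the force distribution; this sketch reduces it to a single aggregate slip flag.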
Award ID(s):
1717066
PAR ID:
10156255
Author(s) / Creator(s):
Date Published:
Journal Name:
International Journal of Humanoid Robotics
Volume:
16
Issue:
03
ISSN:
0219-8436
Page Range / eLocation ID:
1940002
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Madden, John D. ; Anderson, Iain A. ; Shea, Herbert R. (Ed.)
    Current robotic sensing is mainly visual, which is useful only up to the point of contact. To understand how an object is being gripped, tactile feedback is needed. Human grasp is gentle yet firm, with integrated tactile feedback. Ras Labs makes Synthetic Muscle™, a class of electroactive polymer (EAP) based materials and actuators that sense pressure from gentle touch to high impact, controllably contract and expand at low (battery-level) voltage, and attenuate force. Developing this technology toward sensing has produced fingertip-like sensors able to detect very light pressures, down to 0.01 N and even 0.005 N, across a wide pressure range up to 25 N and beyond, with high linearity. By using these soft yet robust Tactile Fingertip™ sensors, immediate feedback was generated at the first point of contact. Because these elastomeric pads provide a soft compliant interface, the first point of contact does not apply excessive force, allowing gentle object handling and control of the force applied to the object. The Tactile Fingertip can also detect a change in pressure location on its surface: directional glide provides real-time feedback, making it possible to detect and prevent slippage by adjusting grip strength. Machine learning (ML) and artificial intelligence (AI) were integrated into these sensors for object identification and for determining a good grip (position, grip force, no slip, no wobble) for pick-and-place and other applications. Synthetic Muscle™ is also being retrofitted as actuators into a human-hand-like biomimetic gripper. The combination of EAP shape-morphing and sensing promises robotic grippers with human-hand-like control and tactile sensing. This is expected to advance robotics in agriculture, medical surgery, therapeutic and personal care, and extreme environments where humans cannot enter, including those with incurable contagions, as well as collaborative robotics that lets humans and robots work together safely, effectively, and intuitively.
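    The directional-glide idea above, slippage showing up as a moving pressure location, can be sketched generically. The toy below is not Ras Labs' hardware or API; the taxel grid and threshold are invented. It tracks the pressure centroid between readings and reports a glide direction:

        import numpy as np

        def center_of_pressure(taxels):
            # Pressure-weighted centroid of an HxW taxel grid (arbitrary units).
            total = taxels.sum()
            if total <= 0:
                return None
            ys, xs = np.indices(taxels.shape)
            return np.array([(xs * taxels).sum(), (ys * taxels).sum()]) / total

        def glide_direction(prev_cop, cop, min_shift=0.2):
            # A sustained centroid shift indicates directional glide (incipient slip).
            if prev_cop is None or cop is None:
                return None
            delta = cop - prev_cop
            return delta if np.linalg.norm(delta) > min_shift else None

        # Toy frames: a contact patch drifting one taxel to the right between readings.
        frame_a = np.zeros((4, 4)); frame_a[1:3, 0:2] = 1.0
        frame_b = np.zeros((4, 4)); frame_b[1:3, 1:3] = 1.0
        d = glide_direction(center_of_pressure(frame_a), center_of_pressure(frame_b))
        if d is not None:
            print("glide detected, direction:", d)  # a grip controller would tighten here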
  2. The mechanoreceptors of the human tactile sensory system contribute to natural grasping manipulations in everyday life. However, for robot systems, attempts to emulate human dexterity are still limited by tactile sensory feedback. In this work, a soft optical lightguide is applied as an afferent nerve fiber in a tactile sensory system. A skin-like soft silicone material, which allows fast and easy fabrication, is combined with a bristle friction model. Due to this novel design, the soft sensor can provide not only normal-force information (up to 5 N) but also lateral-force information generated by stick-slip processes. Through a static force test and a slip motion test, its ability to measure normal forces and to detect stick-slip events is demonstrated. Finally, using a robotic gripper, real-time control applications are investigated in which the sensor helps the gripper apply sufficient force to grasp objects without slipping.
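    A minimal sketch of the closed-loop behavior in that final sentence: raise the commanded grip force whenever a stick-slip event is reported, staying within the sensor's 5 N normal-force range. The sensor interface below is invented for illustration; the paper's actual controller is not shown here.

        import random

        MAX_FORCE_N = 5.0   # upper end of the lightguide sensor's normal-force range
        STEP_N = 0.2        # force increment per detected slip event (assumed)

        def read_stick_slip_event():
            # Stand-in for the lightguide's lateral-force channel; a real system
            # would flag the intensity transients produced by stick-slip.
            return random.random() < 0.3

        grip_force = 0.5    # start with a light pinch
        for _ in range(50):
            if read_stick_slip_event() and grip_force < MAX_FORCE_N:
                grip_force = min(grip_force + STEP_N, MAX_FORCE_N)
                print(f"slip event -> tightening to {grip_force:.1f} N")
            # else: hold force; the object is grasped with just enough pressure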
  3. Tactile sensing is essential for robots to perceive and react to their environment. However, it remains a challenge to make large-scale, flexible tactile skins for robots. Industrial machine knitting offers a way to manufacture customizable fabrics; combined with functional yarns, it can produce highly customizable circuits that can be made into tactile skins for robots. In this work, we present RobotSweater, a machine-knitted pressure-sensitive tactile skin that can be easily applied to robots. We design and fabricate a parameterized multi-layer tactile skin using off-the-shelf yarns, and characterize our sensor on both a flat testbed and a curved surface to show its robust contact detection, multi-contact localization, and pressure sensing capabilities. The sensor is fabricated using a well-established textile manufacturing process with a programmable industrial knitting machine, which makes it highly customizable and low-cost. The textile nature of the sensor also lets it conform easily to the curved surfaces of different robots and gives it a friendly appearance. Using our tactile skins, we conduct closed-loop control with tactile feedback for two applications: (1) human lead-through control of a robot arm, and (2) human-robot interaction with a mobile robot.
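    Row-column pressure matrices like this are typically read by scanning crossings and thresholding. Below is a minimal sketch of multi-contact localization on such a matrix, with an invented readout function standing in for the knitted electrodes' electronics:

        import numpy as np

        ROWS, COLS = 8, 8    # knitted row/column electrode counts (assumed)
        THRESHOLD = 0.3      # normalized pressure above which a taxel is "in contact"

        def scan_matrix():
            # Stand-in for driving each row line and sampling each column line;
            # returns a normalized pressure reading per (row, column) crossing.
            frame = np.zeros((ROWS, COLS))
            frame[1, 2] = 0.8       # two synthetic contacts for the demo
            frame[5:7, 5] = 0.6
            return frame

        def contacts(frame, threshold=THRESHOLD):
            # Every crossing above threshold is reported; adjacent crossings
            # could additionally be clustered into a single contact region.
            return list(zip(*np.nonzero(frame > threshold)))

        for r, c in contacts(scan_matrix()):
            print(f"contact at row {r}, col {c}")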
  4. Robotic grasping typically follows five stages: object detection, object localisation, object pose estimation, grasp pose estimation, and grasp planning. We focus on object pose estimation. Our approach relies on three pieces of information: multiple views of the object, the camera’s extrinsic parameters at those viewpoints, and 3D CAD models of objects. The first step uses a standard deep learning backbone (FCN ResNet) to estimate the object label, semantic segmentation, and a coarse estimate of the object pose with respect to the camera. Our novelty is a refinement module that starts from the coarse pose estimate and refines it by optimisation through differentiable rendering. This is a purely vision-based approach that avoids the need for other information such as point clouds or depth images. We evaluate our object pose estimation approach on the ShapeNet dataset and show improvements over the state of the art. We also show that the estimated object pose yields 99.65% grasp accuracy with the ground-truth grasp candidates on the Object Clutter Indoor Dataset (OCID) Grasp dataset, as computed using standard practice.
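    As a toy stand-in for refinement through differentiable rendering, the sketch below "renders" by projecting CAD vertices at the current pose and lets PyTorch autograd drive gradient descent from a coarse estimate. A real differentiable renderer rasterizes the full model and compares silhouettes; the 1-DoF rotation, loss, and learning rate here are illustrative assumptions only.

        import torch

        def rotz(theta):
            # Differentiable rotation about the camera z-axis (1-DoF toy; real poses are 6-DoF).
            c, s = torch.cos(theta), torch.sin(theta)
            return torch.stack([torch.stack([c, -s]), torch.stack([s, c])])

        cad_pts = torch.tensor([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0]])  # toy "CAD model"
        true_theta = torch.tensor(0.4)
        observed = cad_pts @ rotz(true_theta).T       # target projection from the image

        theta = torch.tensor(0.0, requires_grad=True) # coarse estimate from the CNN backbone
        opt = torch.optim.Adam([theta], lr=0.05)
        for step in range(200):
            opt.zero_grad()
            rendered = cad_pts @ rotz(theta).T        # "render": project model at current pose
            loss = ((rendered - observed) ** 2).mean()  # discrepancy with the observation
            loss.backward()
            opt.step()
        print(f"refined theta = {theta.item():.3f} (true 0.4)")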
  5. A long-standing question in robot hand design is how accurate tactile sensing must be. This paper uses simulated tactile signals and the reinforcement learning (RL) framework to study the sensing needs of grasping systems. Our first experiment investigates the need for rich tactile sensing in the rewards of RL-based grasp refinement algorithms for multi-fingered robotic hands. We systematically integrate different levels of tactile data into the rewards using analytic grasp stability metrics. We find that combining information on contact positions, normals, and forces in the reward yields the highest average success rates: 95.4% for cuboids, 93.1% for cylinders, and 62.3% for spheres, across wrist position errors between 0 and 7 cm and rotational errors between 0 and 14 degrees. This contact-based reward outperforms a non-tactile binary-reward baseline by 42.9%. Our follow-up experiment shows that when training with tactile-enabled rewards, the use of tactile information in the control policy’s state vector can be drastically reduced, with a performance decrease of at most 6.6% when tactile sensing is removed from the state entirely. Since policies do not require access to the reward signal at test time, our work implies that models trained on tactile-enabled hands can be deployed on robotic hands with a smaller sensor suite, potentially reducing cost dramatically.
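    The paper's analytic grasp stability metrics are not reproduced in this abstract; the sketch below only shows the general shape of a contact-based reward that blends contact positions, normals, and force magnitudes, with every term and weight invented for illustration:

        import numpy as np

        def contact_reward(positions, normals, forces, com, w=(1.0, 1.0, 0.5)):
            # Toy reward from contact positions (Nx3), unit normals (Nx3), force
            # magnitudes (N,), and the object's center of mass (3,). Not the paper's metric.
            if len(positions) == 0:
                return 0.0
            # Term 1: contacts close to the center of mass resist torques better.
            proximity = np.exp(-np.linalg.norm(positions - com, axis=1)).mean()
            # Term 2: normals pointing toward the center of mass squeeze the object.
            to_com = com - positions
            to_com /= np.linalg.norm(to_com, axis=1, keepdims=True)
            alignment = np.clip((normals * to_com).sum(axis=1), 0.0, 1.0).mean()
            # Term 3: reward firm but bounded contact forces.
            firmness = np.clip(np.asarray(forces), 0.0, 1.0).mean()
            return w[0] * proximity + w[1] * alignment + w[2] * firmness

        # Two fingertip contacts on opposite sides of a unit cube centered at the origin.
        pos = np.array([[0.5, 0.0, 0.0], [-0.5, 0.0, 0.0]])
        nrm = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
        print(f"reward = {contact_reward(pos, nrm, [0.8, 0.8], np.zeros(3)):.3f}")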