Title: LeTac-MPC: Learning Model Predictive Control for Tactile-Reactive Grasping
Grasping is a crucial task in robotics, requiring tactile feedback and reactive grasp adjustments for robust handling of objects under various conditions and with differing physical properties. In this paper, we introduce LeTac-MPC, a learning-based model predictive control (MPC) approach for tactile-reactive grasping. Our approach enables the gripper to grasp objects with different physical properties in dynamic and force-interactive tasks. We utilize a vision-based tactile sensor, GelSight [1], which perceives high-resolution tactile feedback containing information on the physical properties and states of the grasped object. LeTac-MPC incorporates a differentiable MPC layer designed to model the embeddings extracted by a neural network (NN) from tactile feedback. This design enables convergent and robust grasping control at a frequency of 25 Hz. We propose a fully automated data collection pipeline and collect a dataset using only standardized blocks with different physical properties; nevertheless, the trained controller generalizes to daily objects with different sizes, shapes, materials, and textures. The experimental results demonstrate the effectiveness and robustness of the proposed approach. We compare LeTac-MPC with two purely model-based tactile-reactive controllers (MPC and PD) and with open-loop grasping. Our results show that LeTac-MPC achieves the best performance in dynamic and force-interactive tasks and the best generalizability. We release our code and dataset at https://github.com/ZhengtongXu/LeTac-MPC.
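Below is a minimal, illustrative sketch of the architecture the abstract describes: a neural network encodes tactile images into an embedding, and a differentiable MPC layer (here, an unrolled quadratic tracking problem solved by gradient steps, so the whole pipeline stays trainable end-to-end) turns that embedding into a gripper command. This is not the authors' implementation; the network shapes, cost weights, and the simple integrator gripper model are assumptions made for illustration.

```python
# Hedged sketch of a NN encoder + differentiable MPC layer (not the
# LeTac-MPC source code; all shapes and weights are illustrative).
import torch
import torch.nn as nn

class TactileEncoder(nn.Module):
    """Maps a GelSight-style tactile image to a low-dimensional embedding."""
    def __init__(self, emb_dim: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, emb_dim),
        )
    def forward(self, img):
        return self.net(img)

class DifferentiableMPC(nn.Module):
    """Unrolled quadratic MPC: track an embedding-derived grasp target.
    Solving the horizon-H problem by gradient descent keeps every step
    differentiable, so the encoder can be trained end-to-end."""
    def __init__(self, horizon=10, iters=20, lr=0.1):
        super().__init__()
        self.H, self.iters, self.lr = horizon, iters, lr
        self.w_track = nn.Parameter(torch.tensor(1.0))   # tracking weight
        self.w_effort = nn.Parameter(torch.tensor(0.1))  # control-effort weight
    def forward(self, width, target):
        u = torch.zeros(width.shape[0], self.H, requires_grad=True)
        for _ in range(self.iters):
            x = width
            cost = 0.0
            for t in range(self.H):
                x = x + u[:, t]                      # simple integrator model
                cost = cost + self.w_track * (x - target).pow(2).sum() \
                            + self.w_effort * u[:, t].pow(2).sum()
            g, = torch.autograd.grad(cost, u, create_graph=True)
            u = u - self.lr * g
        return u[:, 0]  # apply only the first action (receding horizon)

encoder, mpc = TactileEncoder(), DifferentiableMPC()
img = torch.randn(1, 3, 64, 64)            # dummy tactile frame
target = encoder(img).mean(dim=1)          # embedding -> scalar grasp target
du = mpc(torch.tensor([30.0]), target)     # gripper-width increment (mm)
```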
Award ID(s):
2423068
PAR ID:
10599514
Author(s) / Creator(s):
Publisher / Repository:
Institute of Electrical and Electronics Engineers (IEEE)
Date Published:
Journal Name:
IEEE Transactions on Robotics
Volume:
40
ISSN:
1552-3098
Page Range / eLocation ID:
4376 to 4395
Subject(s) / Keyword(s):
Tactile control, deep learning in robotics and automation, perception for grasping and manipulation
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The connection between visual input and tactile sensing is critical for object manipulation tasks such as grasping and pushing. In this work, we introduce the challenging task of estimating a set of tactile physical properties from visual information. We aim to build a model that learns the complex mapping between visual information and tactile physical properties. We construct a first-of-its-kind image-tactile dataset with over 400 multiview image sequences and the corresponding tactile properties. A total of fifteen tactile physical properties across categories including friction, compliance, adhesion, texture, and thermal conductance are measured and then estimated by our models. We develop a cross-modal framework comprising an adversarial objective and a novel visuo-tactile joint classification loss. Additionally, we introduce a neural architecture search framework capable of selecting the optimal combination of viewing angles for estimating a given physical property.
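As a concrete illustration of the visuo-tactile joint classification loss mentioned above, here is a minimal sketch under our own assumptions: visual and tactile encoders feed one shared classifier head, so both modalities must predict the same binned property class, which pushes their embeddings into alignment. Feature dimensions and the number of property classes are illustrative, not taken from the paper.

```python
# Hedged sketch of a visuo-tactile joint classification loss
# (dimensions and class counts are assumptions, not from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointClassifier(nn.Module):
    def __init__(self, vis_dim=512, tac_dim=15, emb_dim=64, n_classes=5):
        super().__init__()
        self.vis_enc = nn.Linear(vis_dim, emb_dim)   # stand-in for a CNN
        self.tac_enc = nn.Linear(tac_dim, emb_dim)
        self.head = nn.Linear(emb_dim, n_classes)    # shared across modalities
    def forward(self, vis_feat, tac_feat):
        z_v, z_t = self.vis_enc(vis_feat), self.tac_enc(tac_feat)
        return self.head(z_v), self.head(z_t)

def joint_classification_loss(logits_v, logits_t, labels):
    # Both modalities must predict the same property class; the shared
    # head ties the two embedding spaces together.
    return F.cross_entropy(logits_v, labels) + F.cross_entropy(logits_t, labels)

model = JointClassifier()
vis = torch.randn(8, 512)                # batch of visual features
tac = torch.randn(8, 15)                 # 15 measured tactile properties
labels = torch.randint(0, 5, (8,))       # binned property class per sample
loss = joint_classification_loss(*model(vis, tac), labels)
```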
  2. Abstract— Humans leverage multiple sensor modalities when interacting with objects and discovering their intrinsic properties. The visual modality alone is insufficient for deriving intuition about object properties (e.g., which of two boxes is heavier), making it essential to consider non-visual modalities as well, such as touch and audition. Robots may leverage various modalities to understand object properties via learned exploratory interactions with objects (e.g., grasping, lifting, and shaking behaviors), but challenges remain: the implicit knowledge acquired by one robot via object exploration cannot be directly leveraged by another robot with a different morphology, because the sensor models, observed data distributions, and interaction capabilities differ across robot configurations. To avoid the costly process of learning interactive object perception tasks from scratch, we propose a multi-stage projection framework that transfers implicit knowledge of object properties across heterogeneous robot morphologies. We evaluate our approach on object-property recognition and object-identity recognition tasks, using a dataset containing two heterogeneous robots that perform 7,600 object interactions. Results indicate that knowledge can be transferred across robots, such that a newly deployed robot can bootstrap its recognition models without exhaustively exploring all objects. We also propose a data augmentation technique and show that it improves the generalization of models. We release code, datasets, and additional results at https://github.com/gtatiya/Implicit-Knowledge-Transfer.
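A minimal sketch of one plausible projection stage, under our own assumptions: a small network maps the source robot's interaction features into the target robot's feature space, trained on a handful of objects both robots have explored, so the source robot's labeled data can bootstrap the target's recognition models. Feature sizes and the training loop are illustrative; the paper's multi-stage pipeline is more involved.

```python
# Hedged sketch of a cross-robot feature projection (feature sizes and
# training data are illustrative stand-ins, not from the paper).
import torch
import torch.nn as nn

class CrossRobotProjection(nn.Module):
    """Maps source-robot features (e.g., haptic signals from a shake
    behavior) into the target robot's feature space."""
    def __init__(self, src_dim=128, tgt_dim=96, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(src_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, tgt_dim),
        )
    def forward(self, src_feat):
        return self.net(src_feat)

# Train on a small set of objects both robots have explored, then reuse
# the source robot's labeled data, projected, on the target robot.
proj = CrossRobotProjection()
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)
src = torch.randn(32, 128)   # source-robot features for shared objects
tgt = torch.randn(32, 96)    # target-robot features for the same objects
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.mse_loss(proj(src), tgt)
    loss.backward()
    opt.step()
```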
  3. Abstract The mechanoreceptors of the human tactile sensory system contribute to natural grasping manipulations in everyday life. In robot systems, however, attempts to emulate human dexterity are still limited by tactile sensory feedback. In this work, a soft optical lightguide is applied as an afferent nerve fiber in a tactile sensory system. A skin-like soft silicone material is combined with a bristle friction model, enabling fast and easy fabrication. Owing to this design, the soft sensor can provide not only normal-force information (up to 5 N) but also lateral-force information generated by stick-slip processes. Through a static force test and a slip motion test, its ability to measure normal forces and to detect stick-slip events is demonstrated. Finally, using a robotic gripper, real-time control applications are investigated in which the sensor helps the gripper apply sufficient force to grasp objects without slipping.
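The control loop described at the end could look roughly like the following sketch: stick-slip events appear as abrupt transients in the lateral-force channel, so the controller flags large frame-to-frame jumps and increments the grip force until slipping stops. The thresholds and the read_forces()/set_grip_force() interfaces are hypothetical placeholders, not the paper's API.

```python
# Hedged sketch of slip-reactive grip control; the sensor and gripper
# interfaces here are hypothetical, not the paper's hardware API.
from collections import deque

SLIP_JUMP_N = 0.3    # lateral-force jump (N) treated as a slip event
FORCE_STEP_N = 0.2   # grip-force increment per detected slip
MAX_GRIP_N = 5.0     # the sensor's normal-force range tops out near 5 N

def detect_slip(lateral_history: deque) -> bool:
    """Flag a stick-slip event from the two most recent lateral readings."""
    if len(lateral_history) < 2:
        return False
    return abs(lateral_history[-1] - lateral_history[-2]) > SLIP_JUMP_N

def grip_control_step(sensor, gripper, grip_force: float,
                      lateral_history: deque) -> float:
    """One control-loop iteration: read forces, tighten grip on slip."""
    normal, lateral = sensor.read_forces()     # hypothetical sensor API
    lateral_history.append(lateral)
    if detect_slip(lateral_history):
        grip_force = min(grip_force + FORCE_STEP_N, MAX_GRIP_N)
        gripper.set_grip_force(grip_force)     # hypothetical gripper API
    return grip_force
```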
  4. Tactile sensing has been increasingly utilized in robot control of unknown objects to infer physical properties and optimize manipulation. However, there is limited understanding of how different sensory modalities contribute to interactive perception during complex interactions, both in robots and in humans. This study investigated the effect of visual and haptic information on humans' exploratory interactions with a 'cup of coffee', an object with nonlinear internal dynamics. Subjects were instructed to rhythmically transport a virtual cup with a ball rolling inside between two targets at a specified frequency, using a robotic interface. The cup and targets were displayed on a screen, and force feedback from the cup-and-ball dynamics was provided via the robotic manipulandum. Subjects were encouraged to explore and prepare the dynamics by "shaking" the cup-and-ball system to find the best initial conditions prior to the task. Two groups of subjects received full haptic feedback about the cup-and-ball movement during the task; however, for one group the ball movement was visually occluded. Visual information about the ball movement had two distinctive effects on performance: it reduced the preparation time needed to understand the dynamics and, importantly, it led to simpler, more linear input-output interactions between hand and object. These results highlight the distinct roles that visual and haptic information about nonlinear internal dynamics play in the interactive perception of complex objects.
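For readers unfamiliar with the task, a simplified (and assumed) model of such cup-and-ball dynamics treats the ball as a pendulum of length r suspended in a cup whose horizontal acceleration the subject controls; the sketch below simulates that model. Parameter values and the sinusoidal transport profile are illustrative, not the study's exact setup.

```python
# Hedged sketch of simplified cup-and-ball (pendulum-in-cup) dynamics;
# parameters and the input profile are illustrative assumptions.
import numpy as np

g, r, dt = 9.81, 0.25, 0.001   # gravity (m/s^2), cup radius (m), step (s)

def step(phi, phi_dot, cup_acc):
    """Advance the ball angle phi (rad from vertical) one Euler step,
    given the cup's commanded horizontal acceleration (m/s^2)."""
    phi_ddot = -(g * np.sin(phi) + cup_acc * np.cos(phi)) / r
    return phi + phi_dot * dt, phi_dot + phi_ddot * dt

# Rhythmic transport: sinusoidal cup motion at the instructed frequency.
phi, phi_dot, f = 0.05, 0.0, 1.0          # small initial ball offset, 1 Hz
for k in range(2000):                     # 2 s of simulated movement
    cup_acc = 0.5 * np.sin(2 * np.pi * f * k * dt)
    phi, phi_dot = step(phi, phi_dot, cup_acc)
print(f"final ball angle: {phi:.3f} rad")
```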
  5. Robotic grasping is successful when a robot can sense and grasp an object without letting it slip. Beyond industrial settings, there are two main approaches to robotic grasping. The first is planning-based grasping, in which the object geometry is known beforehand and stable grasps are computed algorithmically [1]. The second uses tactile feedback. Currently, capacitive sensors are placed beneath stiff pads on the front of robotic fingers [2]. With post-execution grasp adjustment procedures to estimate grasp stability, a support vector machine classifier can distinguish stable from unstable grasps, with an accuracy of 81% across the classes of tested objects [1]. We propose to improve the classifier's accuracy by wrapping flexible sensors around the robotic finger to gain information from its edges and sides.
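A minimal sketch of the kind of support vector machine classifier described above, assuming generic tactile feature vectors: readings gathered during a post-execution grasp adjustment are flattened into features and classified as stable or unstable. The data here are synthetic stand-ins, not measurements from the cited work.

```python
# Hedged sketch of an SVM grasp-stability classifier on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# 200 grasps x 24 tactile features (e.g., taxel readings on the finger).
X = rng.normal(size=(200, 24))
y = (X[:, :8].mean(axis=1) > 0).astype(int)   # synthetic stable/unstable label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```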