Despite advances in digitizing vision and hearing, touch still lacks an equivalent digital interface matching the fidelity of human perception. This gap limits the quality of digital tactile information and the realism of virtual experiences. Here, we introduce a step toward human-resolution haptics: a class of wearable tactile displays designed to match the spatial and temporal acuity of the human fingertip. Our device, VoxeLite, is a 0.1-millimeter-thick, 0.19-gram, skin-conformal array of individually addressable soft electroadhesive actuators (“nodes”). As users touch and move across surfaces, VoxeLite delivers high-resolution distributed forces via the nodes. Enabled by scalable microfabrication techniques, the display achieves actuator densities up to 110 nodes per square centimeter, produces stimuli up to 800 hertz, and remains transparent to real-world tactile input. We demonstrate its ability to render small-scale hapticons and virtual textures and transmit physical surfaces, validated through human psychophysics and biomimetic sensing. These findings position VoxeLite as a platform for human-resolution haptics in immersive interfaces, robotics, and digital touch communication.
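To make the rendering idea concrete, the sketch below shows one plausible way to drive such an addressable node array: each node receives a sinusoidal stimulus (here at the abstract's upper bound of 800 hertz) whose amplitude follows a spatial texture map. The function and parameter names are hypothetical and the drive scheme is an assumption for illustration; the device's actual electronics and control interface are not described here.

```python
# Illustrative sketch only: all names (render_texture_frame, amplitude_map) are
# hypothetical. Assumes each electroadhesive node is driven by an amplitude-scaled
# sinusoid at a chosen stimulus frequency (up to ~800 Hz per the abstract).
import numpy as np

def render_texture_frame(amplitude_map, freq_hz=800.0, sample_rate=20000, duration_s=0.01):
    """Generate per-node drive waveforms for one rendering frame.

    amplitude_map : (rows, cols) array in [0, 1], spatial texture intensity.
    Returns an array of shape (rows, cols, n_samples).
    """
    t = np.arange(0, duration_s, 1.0 / sample_rate)
    carrier = np.sin(2 * np.pi * freq_hz * t)      # temporal stimulus waveform
    return amplitude_map[..., None] * carrier      # scale the carrier per node

# Example: a 10 x 11 node patch (~110 nodes per square centimeter) rendering a grating
amplitude_map = np.tile(np.linspace(0.0, 1.0, 11), (10, 1))
frames = render_texture_frame(amplitude_map)
print(frames.shape)  # (10, 11, 200)
```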
VibTac: A High-Resolution High-Bandwidth Tactile Sensing Finger for Multi-Modal Perception in Robotic Manipulation
Tactile sensing is pivotal for enhancing robot manipulation abilities by providing crucial, highly localized feedback. However, existing sensors often lack the resolution and bandwidth required for intricate tasks. To address this gap, we introduce VibTac, a novel multi-modal tactile sensing finger designed to offer high-resolution and high-bandwidth tactile sensing simultaneously. VibTac seamlessly integrates vision-based and vibration-based tactile sensing modes, which provide high spatial resolution and high bandwidth respectively, in a streamlined, human-inspired design that is versatile across tasks. This paper outlines the key design elements of VibTac and its fabrication methods, highlighting the role of the Elastomer Gel Pad (EGP) in its sensing mechanism. The sensor's multi-modal performance is validated through 3D reconstruction and spectral analysis to discern tactile stimuli effectively. In experimental trials, VibTac achieves over 90% accuracy in insertion tasks involving objects that emit distinct sounds, such as Ethernet connectors. Leveraging vision-based tactile sensing for object localization and a deep-learning model for "click" sound classification, VibTac demonstrates its robustness in real-world scenarios. Video of the sensor in operation is available at https://youtu.be/kmKIUlXGroo.
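As an illustration of the vibration-based modality, the following sketch (not the authors' code) computes a simple band-energy feature from a vibration window, of the kind that spectral analysis for "click" detection might build on. The sampling rate, frequency band, and function name are assumptions.

```python
# Minimal sketch, not VibTac's implementation: extract the fraction of spectral
# energy in an assumed "click" band from one window of vibration data.
import numpy as np

SAMPLE_RATE_HZ = 8000          # assumed vibration sampling rate
CLICK_BAND_HZ = (1000, 3000)   # assumed band where connector clicks concentrate energy

def extract_band_energy(signal, sample_rate=SAMPLE_RATE_HZ, band=CLICK_BAND_HZ):
    """Return the fraction of spectral energy inside the chosen band."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal)))) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return spectrum[in_band].sum() / (spectrum.sum() + 1e-12)

# A feature like this could feed a downstream classifier such as the
# deep-learning "click" model the abstract mentions.
window = np.random.randn(1024)          # stand-in for a recorded vibration window
print(extract_band_energy(window))
```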
- Award ID(s): 2423068
- PAR ID: 10599523
- Publisher / Repository: IEEE
- Date Published:
- Journal Name: IEEE Transactions on Haptics
- ISSN: 1939-1412
- Page Range / eLocation ID: 1 to 12
- Subject(s) / Keyword(s): tactile sensing, vision-based tactile, vibration-based tactile, manipulation
- Format(s): Medium: X
- Sponsoring Org: National Science Foundation
More Like this
The most common sensing modalities found in a robot perception system are vision and touch, which together can provide global and highly localized data for manipulation. However, these sensing modalities often fail to adequately capture the behavior of target objects during the critical moments as they transition out of static, controlled contact with an end-effector to dynamic and uncontrolled motion. In this work, we present a novel multimodal visuotactile sensor that provides simultaneous visuotactile and proximity depth data. The sensor integrates an RGB camera and air pressure sensor to sense touch with an infrared time-of-flight (ToF) camera to sense proximity by leveraging a selectively transmissive soft membrane to enable the dual sensing modalities. We present the mechanical design, fabrication techniques, algorithm implementations, and evaluation of the sensor's tactile and proximity modalities. The sensor is demonstrated in three open-loop robotic tasks: approaching and contacting an object, catching, and throwing. The fusion of tactile and proximity data could be used to capture key information about a target object's transition behavior for sensor-based control in dynamic manipulation.
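A minimal sketch of how fused proximity and tactile readings could expose the contact transition the authors target is given below; the thresholds, field names, and three-phase labeling are assumptions for illustration, not the sensor's actual processing pipeline.

```python
# Hedged sketch: a simple contact-transition detector over fused sensor frames.
# depth_mm, pressure_kpa, and the thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SensorFrame:
    depth_mm: float      # nearest-object distance from the ToF camera
    pressure_kpa: float  # internal air-pressure change from membrane deformation

def classify_phase(frame, near_mm=15.0, contact_kpa=0.5):
    """Label the interaction phase from one fused sensor frame."""
    if frame.pressure_kpa > contact_kpa:
        return "contact"
    if frame.depth_mm < near_mm:
        return "approach"
    return "free_space"

print(classify_phase(SensorFrame(depth_mm=40.0, pressure_kpa=0.0)))  # free_space
print(classify_phase(SensorFrame(depth_mm=8.0, pressure_kpa=0.0)))   # approach
print(classify_phase(SensorFrame(depth_mm=2.0, pressure_kpa=1.2)))   # contact
```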
Scalable, high-density electronic skins (e-skins) are a desirable goal of tactile sensing. However, a realization of this goal has been elusive due to the trade-off between spatial and temporal resolution that current tactile sensors suffer from. Additionally, as tactile sensing grids become large, wiring becomes unmanageable, and there is a need for a wireless approach. In this work, a scalable, event-based, passive tactile sensing system is proposed that is based on radio-frequency identification (RFID) technology. An RFID-based tactile sensing hand is developed with 19 pressure-sensing taxels. The taxels are read wirelessly using a single 'hand-shaped' RFID antenna. Each RFID tag is transformed into a pressure sensor by disconnecting the RFID chip from its antenna and embedding the chip and antenna into soft elastomer with an air gap introduced between the RFID chip and its antenna. When a pressure event occurs, the RFID chip contacts its antenna and receives power and communicates with the RFID reader. Thus, the sensor is transformed into a biomimetic event-based sensor, whose response is activated only when used. Further, this work demonstrates the feasibility of constructing event-based, passive sensing grids that can be read wirelessly. Future tactile sensing e-skins can utilize this approach to become scalable and dense, while retaining high temporal resolution. Moreover, this approach can be applied beyond tactile sensing, for the development of scalable and high-density sensors of any modality.
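The event-based readout lends itself to a very small decoding step: because a taxel's RFID chip only responds while pressed, each reader inventory cycle directly yields the set of active taxels. The sketch below illustrates this idea with a hypothetical tag-to-taxel mapping; it is not the authors' firmware or reader API.

```python
# Illustrative sketch under assumptions: only pressed taxels back-scatter, so a
# read cycle is itself the pressure "event" stream. Tag IDs and taxel names are made up.
TAG_TO_TAXEL = {"E200-0001": (0, "thumb_tip"), "E200-0002": (1, "index_tip")}

def decode_read_cycle(tag_ids_seen):
    """Convert one reader inventory cycle into the set of pressed taxels."""
    return sorted(TAG_TO_TAXEL[tag] for tag in tag_ids_seen if tag in TAG_TO_TAXEL)

# Unread tags imply no contact; only the pressed taxel appears in the output.
print(decode_read_cycle({"E200-0002"}))  # [(1, 'index_tip')]
```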
Knowledge of 3-D object shape is of great importance to robot manipulation tasks, but may not be readily available in unstructured environments. While vision is often occluded during robot-object interaction, high-resolution tactile sensors can give a dense local perspective of the object. However, tactile sensors have limited sensing area and the shape representation must faithfully approximate non-contact areas. In addition, a key challenge is efficiently incorporating these dense tactile measurements into a 3-D mapping framework. In this work, we propose an incremental shape mapping method using a GelSight tactile sensor and a depth camera. Local shape is recovered from tactile images via a learned model trained in simulation. Through efficient inference on a spatial factor graph informed by a Gaussian process, we build an implicit surface representation of the object. We demonstrate visuo-tactile mapping in both simulated and real-world experiments, to incrementally build 3-D reconstructions of household objects.
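To give a flavor of how sparse tactile contacts can induce an implicit surface, the sketch below fits a plain Gaussian-process regressor to signed-distance-style observations (zero at contact points, positive at known free-space points) and queries its posterior mean. It is a simplification for illustration only; the paper's method uses a learned tactile model and inference on a spatial factor graph, neither of which is reproduced here.

```python
# Minimal GP implicit-surface sketch (not the paper's implementation).
import numpy as np

def rbf_kernel(a, b, length=0.05, var=1.0):
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / length**2)

def gp_sdf(query, contacts, free_points, noise=1e-4):
    """Posterior mean of a signed-distance field: 0 at contacts, >0 at free-space points."""
    X = np.vstack([contacts, free_points])
    y = np.concatenate([np.zeros(len(contacts)), 0.02 * np.ones(len(free_points))])
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    return rbf_kernel(query, X) @ np.linalg.solve(K, y)

contacts = np.array([[0.00, 0.00, 0.01], [0.01, 0.00, 0.01]])   # tactile contact points on the surface
free = np.array([[0.00, 0.00, 0.05]])                           # point known to lie off the surface
print(gp_sdf(np.array([[0.005, 0.0, 0.01]]), contacts, free))   # near zero: close to the implicit surface
```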
The connection between visual input and tactile sensing is critical for object manipulation tasks such as grasping and pushing. In this work, we introduce the challenging task of estimating a set of tactile physical properties from visual information. We aim to build a model that learns the complex mapping between visual information and tactile physical properties. We construct a first-of-its-kind image-tactile dataset with over 400 multiview image sequences and the corresponding tactile properties. A total of fifteen tactile physical properties across categories including friction, compliance, adhesion, texture, and thermal conductance are measured and then estimated by our models. We develop a cross-modal framework comprised of an adversarial objective and a novel visuo-tactile joint classification loss. Additionally, we introduce a neural architecture search framework capable of selecting optimal combinations of viewing angles for estimating a given physical property.
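A hedged sketch of a training objective in the spirit of this abstract (a joint visuo-tactile classification term plus an adversarial term) is shown below. The encoder, discriminator, binning of the fifteen properties into discrete levels, and the loss weighting are all assumptions, not the authors' architecture.

```python
# Illustrative cross-modal objective: classification over binned property levels
# plus an adversarial term that pushes image features toward tactile-like features.
import torch
import torch.nn as nn

# Hypothetical encoder and heads; dimensions are assumptions.
image_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64), nn.ReLU())
property_head = nn.Linear(64, 15 * 5)        # 15 tactile properties, each binned into 5 assumed levels
discriminator = nn.Sequential(nn.Linear(64, 1), nn.Sigmoid())

ce, bce = nn.CrossEntropyLoss(), nn.BCELoss()

images = torch.randn(8, 3, 32, 32)               # stand-in multiview image batch
property_labels = torch.randint(0, 5, (8, 15))   # one discrete level per property

features = image_encoder(images)
logits = property_head(features).view(8, 15, 5)
class_loss = ce(logits.reshape(-1, 5), property_labels.reshape(-1))   # joint classification term
adv_loss = bce(discriminator(features), torch.ones(8, 1))             # encourage tactile-like features
total_loss = class_loss + 0.1 * adv_loss                              # assumed weighting
total_loss.backward()
print(float(total_loss))
```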