Title: A Multi-modal Sensor Array for Safe Human-Robot Interaction and Mapping
In the future, human-robot interaction will include collaboration in close quarters where the environment geometry is partially unknown. As a means for enabling such interaction, this paper presents a multi-modal sensor array capable of contact detection and localization, force sensing, proximity sensing, and mapping. The sensor array integrates Hall effect and time-of-flight (ToF) sensors in an I2C communication network. The design, fabrication, and characterization of the sensor array for a future in-situ collaborative continuum robot are presented. Possible perception benefits of the sensor array are demonstrated for accidental contact detection, mapping of the environment, selection of admissible zones for bracing, and constrained motion control of the end effector while maintaining a bracing constraint with an admissible rolling motion.
Award ID(s):
1734360
NSF-PAR ID:
10098422
Author(s) / Creator(s):
Date Published:
Journal Name:
IEEE International Conference on Robotics and Automation
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
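
The paper is summarized here only at the abstract level, but the architecture it describes, Hall effect and time-of-flight sensors sharing an I2C bus, can be illustrated with a short host-side polling sketch. The Python sketch below uses the smbus2 library; the device addresses, register offsets, and the read_array helper are placeholders for illustration, not values or code from the paper.

```python
# Minimal sketch of polling a multi-modal I2C sensor array.
# All addresses and registers below are hypothetical placeholders,
# not the actual map used by the paper's sensor array.
from smbus2 import SMBus

TOF_ADDRS = [0x29, 0x2A, 0x2B]   # hypothetical ToF sensor addresses
HALL_ADDRS = [0x40, 0x41]        # hypothetical Hall effect sensor addresses
TOF_RANGE_REG = 0x14             # hypothetical range-result register
HALL_FIELD_REG = 0x00            # hypothetical field-strength register

def read_array(bus_id=1):
    """Return one snapshot of proximity and contact-force channels."""
    with SMBus(bus_id) as bus:
        # ToF sensors provide proximity ranges (mm) for mapping.
        ranges = [bus.read_word_data(a, TOF_RANGE_REG) for a in TOF_ADDRS]
        # Hall sensors report magnet displacement, a proxy for contact force.
        fields = [bus.read_word_data(a, HALL_FIELD_REG) for a in HALL_ADDRS]
    return ranges, fields

if __name__ == "__main__":
    ranges_mm, hall_raw = read_array()
    print("proximity (mm):", ranges_mm)
    print("hall raw counts:", hall_raw)
```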
More Like this
  1. Tactile sensing is essential for robots to perceive and react to the environment. However, it remains a challenge to make large-scale, flexible tactile skins for robots. Industrial machine knitting provides a way to manufacture customizable fabrics; along with functional yarns, it can produce highly customizable circuits that can be made into tactile skins for robots. In this work, we present RobotSweater, a machine-knitted pressure-sensitive tactile skin that can be easily applied to robots. We design and fabricate a parameterized multi-layer tactile skin using off-the-shelf yarns, and characterize our sensor on both a flat testbed and a curved surface to show its robust contact detection, multi-contact localization, and pressure sensing capabilities. The sensor is fabricated using a well-established textile manufacturing process with a programmable industrial knitting machine, which makes it highly customizable and low-cost. The textile nature of the sensor also lets it conform to the curved surfaces of different robots and gives it a friendly appearance. Using our tactile skins, we conduct closed-loop control with tactile feedback for two applications: (1) human lead-through control of a robot arm, and (2) human-robot interaction with a mobile robot. (A minimal sketch of the row-column scanning behind this kind of skin appears after this list.)
  2. Detecting and localizing contacts is essential for robot manipulators performing contact-rich tasks in unstructured environments. While robot skins can localize contacts on the surface of robot arms, these sensors are not yet robust or easily accessible. As such, prior works have explored using proprioceptive observations, such as joint velocities and torques, to perform contact localization. Many past approaches assume that the robot is static when contact occurs, that only a single contact is made at a time, or that accurate dynamics models and joint torque sensing are available. In this work, we relax these assumptions and propose using Domain Randomization to train a neural network to localize contacts of robot arms in motion without joint torque observations. Our method uses a novel cylindrical projection encoding of the robot arm surface, which allows the network to use convolution layers to process input features and transposed convolution layers to predict contacts. The trained network achieves a contact detection accuracy of 91.5% and a mean contact localization error of 3.0 cm. We further demonstrate an application of the contact localization model in an obstacle mapping task, evaluated in both simulation and the real world. (A sketch of this encoder-decoder structure appears after this list.)
  3. This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated by human touch, and social gestures are inferred from the captured images. This approach enables rich tactile interaction with robots without the sensor arrays used in traditional social-robot tactile skins. It also supports touch interaction with non-rigid robots, achieves high-resolution sensing for robots of different sizes and surface shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures from shadows using Densely Connected Convolutional Networks, and an algorithm for tracking the positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5-96.0% accuracy, depending on the lighting, and can accurately track touch positions as well as infer motion activities in realistic interaction conditions. Additional applications for this method include interactive screens on inflatable robots and privacy-maintaining robots for the home. (A sketch of such a shadow-image classifier appears after this list.)
  4. Knowledge of 3-D object shape is of great importance to robot manipulation tasks but may not be readily available in unstructured environments. While vision is often occluded during robot-object interaction, high-resolution tactile sensors can give a dense local perspective of the object. However, tactile sensors have a limited sensing area, and the shape representation must faithfully approximate non-contact areas. A key challenge is efficiently incorporating these dense tactile measurements into a 3-D mapping framework. In this work, we propose an incremental shape mapping method using a GelSight tactile sensor and a depth camera. Local shape is recovered from tactile images via a learned model trained in simulation. Through efficient inference on a spatial factor graph informed by a Gaussian process, we build an implicit surface representation of the object. We demonstrate visuo-tactile mapping in both simulated and real-world experiments, incrementally building 3-D reconstructions of household objects. (A sketch of a Gaussian-process implicit surface appears after this list.)
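
The RobotSweater entry above describes a knitted row-column pressure matrix. A minimal sketch of the usual way such a matrix is scanned, driving one row at a time while sampling every column, is below; drive_row and read_columns are stand-ins for whatever GPIO/ADC interface the skin is wired to and do not come from the paper.

```python
# Hypothetical row-column scan loop for a knitted pressure matrix.
import numpy as np

def scan_matrix(drive_row, read_columns, n_rows, n_cols):
    """Drive each conductive row in turn and sample all column ADCs,
    yielding an (n_rows, n_cols) pressure image."""
    frame = np.zeros((n_rows, n_cols))
    for r in range(n_rows):
        drive_row(r, True)            # energize one knitted row electrode
        frame[r, :] = read_columns()  # sample every column electrode
        drive_row(r, False)
    return frame

def detect_contacts(frame, threshold=0.2):
    """Return (row, col) taxel indices above a contact threshold."""
    return list(zip(*np.where(frame > threshold)))
```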
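The contact-localization entry processes a cylindrical projection of the arm surface with convolution layers and predicts contacts with transposed convolutions. A minimal PyTorch sketch of that encoder-decoder shape follows; the channel counts, grid size, and input features are illustrative assumptions, not the paper's configuration.

```python
# Illustrative encoder-decoder over a cylindrical projection grid:
# convolutions encode per-cell proprioceptive features, transposed
# convolutions decode a per-cell contact map. All sizes are assumptions.
import torch
import torch.nn as nn

class ContactNet(nn.Module):
    def __init__(self, in_ch=8):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # x: (batch, in_ch, H, W) features on the cylindrical grid;
        # returns per-cell contact logits of the same spatial size.
        return self.decoder(self.encoder(x))

logits = ContactNet()(torch.randn(1, 8, 32, 64))
print(logits.shape)  # torch.Size([1, 1, 32, 64])
```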
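The shadow-sensing entry classifies touch gestures from under-skin camera images using Densely Connected Convolutional Networks. A minimal torchvision sketch of a DenseNet with a six-class head is below; the gesture labels and input size are hypothetical, and no trained weights from the paper are implied.

```python
# Hypothetical six-gesture DenseNet classifier for shadow images.
import torch
import torch.nn as nn
from torchvision import models

GESTURES = ["pat", "poke", "press", "rub", "slap", "stroke"]  # assumed labels

model = models.densenet121(weights=None)
model.classifier = nn.Linear(model.classifier.in_features, len(GESTURES))

def classify(shadow_img):
    """shadow_img: (3, 224, 224) tensor from the under-skin camera."""
    model.eval()
    with torch.no_grad():
        logits = model(shadow_img.unsqueeze(0))
    return GESTURES[logits.argmax(dim=1).item()]

print(classify(torch.randn(3, 224, 224)))  # untrained, so an arbitrary label
```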
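The visuo-tactile mapping entry represents object shape as a Gaussian-process implicit surface. The sketch below shows the core idea on synthetic 2-D data, regressing a signed-distance value whose zero level set is the surface; it does not reproduce the paper's factor-graph inference or tactile front end.

```python
# Toy Gaussian-process implicit surface: the GP regresses a signed
# distance that is 0 at contact points and negative inside the object.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 60)
surface = np.c_[np.cos(theta), np.sin(theta)]  # contact points on a unit circle
interior = np.zeros((1, 2))                    # one known interior point
X = np.vstack([surface, interior])
y = np.r_[np.zeros(len(surface)), [-1.0]]      # SDF: 0 on surface, <0 inside

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y)

# Query the implicit surface: roughly -1 at the center, ~0 on the boundary.
print(gp.predict(np.array([[0.0, 0.0], [1.0, 0.0]])))
```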