Title: ShadowSense: Detecting Human Touch in a Social Robot Using Shadow Image Classification
This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated by human touch and infers social gestures from the captured images. This approach enables rich tactile interaction with robots without the sensor arrays used in traditional social robot tactile skins. It also supports touch interaction with non-rigid robots, achieves high-resolution sensing for robots with surfaces of different sizes and shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures from shadows that uses Densely Connected Convolutional Networks (DenseNets), and an algorithm for tracking the positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5–96.0% accuracy, depending on the lighting, and can accurately track touch positions as well as infer motion activities in realistic interaction conditions. Additional applications for this method include interactive screens on inflatable robots and privacy-maintaining robots for the home.
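The record contains no code; as a rough sketch of the classification stage described in the abstract, the snippet below pairs a stock DenseNet-121 with a six-way gesture head, assuming PyTorch/torchvision. The preprocessing choices, hyperparameters, and model variant are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch of a shadow-gesture classifier in the spirit of the paper's
# approach (a DenseNet over shadow images). All names, the input pipeline,
# and hyperparameters here are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_GESTURES = 6  # the paper distinguishes six touch gestures

# Grayscale shadow frames replicated to 3 channels to reuse the stock
# DenseNet stem; normalization constants are the ImageNet defaults.
# Applied to PIL frames from the camera before inference.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def build_model() -> nn.Module:
    """DenseNet-121 with its classifier head swapped for six gesture classes."""
    model = models.densenet121(weights=None)  # or ImageNet-pretrained weights
    model.classifier = nn.Linear(model.classifier.in_features, NUM_GESTURES)
    return model

if __name__ == "__main__":
    model = build_model().eval()
    dummy_frame = torch.randn(1, 3, 224, 224)  # stands in for one shadow image
    with torch.no_grad():
        logits = model(dummy_frame)
    print(logits.argmax(dim=1))  # predicted gesture index, 0..5
```

In the paper's setting the input frames would come from the camera beneath the translucent skin; here a random tensor stands in for one preprocessed frame.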
Award ID(s):
1830471
PAR ID:
10291126
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Volume:
4
Issue:
4
ISSN:
2474-9567
Page Range / eLocation ID:
1 to 24
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like This
  1. Social touch provides a rich non-verbal communication channel between humans and robots. Prior work has identified a set of touch gestures for human-robot interaction and described them with natural language labels (e.g., stroking, patting). Yet, no data exists on the semantic relationships between the touch gestures in users’ minds. To endow robots with touch intelligence, we investigated how people perceive the similarities of social touch labels from the literature. In an online study, 45 participants grouped 36 social touch labels based on their perceived similarities and annotated their groupings with descriptive names. We derived quantitative similarities of the gestures from these groupings and analyzed the similarities using hierarchical clustering. The analysis resulted in 9 clusters of touch gestures formed around the social, emotional, and contact characteristics of the gestures. We discuss the implications of our results for designing and evaluating touch sensing and interactions with social robots. 
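As an illustration of the analysis described above (pairwise similarities derived from participants' groupings, then hierarchical clustering), here is a minimal sketch assuming NumPy/SciPy. The labels and groupings below are invented toy data; the study itself used 36 labels, 45 participants, and arrived at 9 clusters.

```python
# Minimal sketch: derive gesture similarities from grouping data, then
# cluster them hierarchically. Toy labels/groupings are for illustration only.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

labels = ["stroke", "pat", "tickle", "hit", "push", "squeeze"]

# Each participant's grouping: a partition of label indices into sets.
participant_groupings = [
    [{0, 1, 2}, {3, 4}, {5}],
    [{0, 2}, {1, 5}, {3, 4}],
]

n = len(labels)
co = np.zeros((n, n))
for grouping in participant_groupings:
    for group in grouping:
        for i in group:
            for j in group:
                co[i, j] += 1

co /= len(participant_groupings)   # co-occurrence probability = similarity
dist = 1.0 - co                    # turn similarity into a distance
np.fill_diagonal(dist, 0.0)

# Condensed upper-triangle distances for SciPy's average-linkage clustering.
condensed = dist[np.triu_indices(n, k=1)]
Z = linkage(condensed, method="average")
clusters = fcluster(Z, t=2, criterion="maxclust")
print(dict(zip(labels, clusters)))  # label -> cluster id
```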
  2. Abstract Effective interactions between humans and robots are vital to achieving shared tasks in collaborative processes. Robots can utilize diverse communication channels to interact with humans, such as hearing, speech, sight, touch, and learning. Our focus, amidst the various means of interactions between humans and robots, is on three emerging frontiers that significantly impact the future directions of human–robot interaction (HRI): (i) human–robot collaboration inspired by human–human collaboration, (ii) brain-computer interfaces, and (iii) emotionally intelligent perception. First, we explore advanced techniques for human–robot collaboration, covering a range of methods from compliance and performance-based approaches to synergistic and learning-based strategies, including learning from demonstration, active learning, and learning from complex tasks. Then, we examine innovative uses of brain-computer interfaces for enhancing HRI, with a focus on applications in rehabilitation, communication, brain state and emotion recognition. Finally, we investigate emotional intelligence in robotics, focusing on translating human emotions to robots via facial expressions, body gestures, and eye-tracking for fluid, natural interactions. Recent developments in these emerging frontiers and their impact on HRI are detailed and discussed. We highlight contemporary trends and emerging advancements in the field. Ultimately, this paper underscores the necessity of a multimodal approach in developing systems capable of adaptive behavior and effective interaction between humans and robots, thus offering a thorough understanding of the diverse modalities essential for maximizing the potential of HRI.
  3. Abstract Enhancing physical human-robot interaction requires improving the tactile perception of physical touch. Robot skin sensors exhibiting piezoresistive behavior can be used in conjunction with collaborative robots. In past work, fabrication of these tactile arrays was done using cleanroom techniques such as spin coating, photolithography, sputtering, and wet and dry etching onto flexible polymers. In this paper, we present an additive, non-cleanroom process for depositing PEDOT:PSS, the organic polymer responsible for the piezoresistive behavior of the robot skin sensor arrays. This publication details the patterning of the robot skin sensor structures and the adaptation of inkjet printing technology to the fabrication process. This makes it possible to scale production output while reducing cleanroom fabrication cost and cutting the PEDOT:PSS deposition time from approximately five hours to five minutes. Furthermore, these skin sensor arrays are tested on a station equipped with a force plunger and an integrated circuit designed to provide perception feedback on various force load profiles in an automated process. The results show uniform deposition of the PEDOT:PSS, consistent resistance measurements, and appropriate tactile response across an array of 16 sensors.
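As a loose illustration of the automated resistance check described above, the sketch below scans a hypothetical 4x4 piezoresistive array through a voltage divider and reports uniformity. The ADC interface, divider values, and uniformity metric are placeholders, not the authors' test station.

```python
# Illustrative sketch of scanning a 4x4 piezoresistive skin-sensor array and
# checking resistance uniformity. The acquisition stub and constants are
# hypothetical placeholders, not the hardware described in the abstract.
import statistics

V_SUPPLY = 3.3    # volts across the divider (assumed)
R_REF = 10_000.0  # known reference resistor in ohms (assumed)

def read_adc_volts(row: int, col: int) -> float:
    """Placeholder for the real acquisition hardware; returns divider voltage."""
    raise NotImplementedError("wire this to the actual ADC/multiplexer")

def sensor_resistance(v_out: float) -> float:
    """Solve the divider V_out = V_SUPPLY * R_sensor / (R_sensor + R_REF)."""
    return R_REF * v_out / (V_SUPPLY - v_out)

def scan_array(rows: int = 4, cols: int = 4) -> list[list[float]]:
    return [[sensor_resistance(read_adc_volts(r, c)) for c in range(cols)]
            for r in range(rows)]

def uniformity(resistances: list[list[float]]) -> float:
    """Coefficient of variation across the array; lower means more uniform."""
    flat = [r for row in resistances for r in row]
    return statistics.stdev(flat) / statistics.mean(flat)
```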
  4. Advanced applications for human-robot interaction require perception of physical touch in a manner that imitates human tactile perception. Feedback from tactile sensor arrays can be used to control a robot's interaction with its environment and with humans. In this paper, we present our efforts to fabricate piezoresistive organic polymer sensor arrays using PEDOT:PSS, or poly(3,4-ethylenedioxythiophene)-poly(styrenesulfonate). Sensors are realized as strain gauges on Kapton substrates with thermal and electrical response characteristics to human touch. We detail the fabrication processes associated with a gold etching technique combined with a wet lift-off photolithographic process to implement a circular tree-designed sensor microstructure in our cleanroom. This microstructure is tested on a load testing apparatus facilitated by an integrated circuit design. Furthermore, a lamination process is employed to compensate for temperature drift while measuring pressure on double-sided sensor substrates. Experiments carried out to evaluate the performance of the fabricated structure indicate a 100% sensor yield with the updated technique.
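As a toy illustration of the drift-compensation idea above (a double-sided substrate whose back-side gauge sees the same temperature change but no touch load), the sketch below cancels the thermal term with a differential reading. All coefficients are invented for illustration, not measured values from the paper.

```python
# Toy model of temperature-drift compensation on a double-sided substrate:
# the back-side gauge carries the thermal term only, so subtracting it
# removes drift from the pressure reading. Coefficients are made up.
ALPHA_T = 0.002    # fractional resistance change per degree C (assumed)
K_PRESSURE = 0.01  # fractional resistance change per kPa (assumed)
R0 = 10_000.0      # nominal gauge resistance in ohms (assumed)

def gauge(pressure_kpa: float, delta_t: float) -> float:
    """Resistance of one gauge under load and temperature change."""
    return R0 * (1 + K_PRESSURE * pressure_kpa + ALPHA_T * delta_t)

def compensated_pressure(r_front: float, r_back: float) -> float:
    """Back side sees temperature only; the difference isolates pressure."""
    return (r_front - r_back) / (R0 * K_PRESSURE)

# Example: a 5 kPa touch with a 10 degree C drift still reads ~5 kPa.
r_f = gauge(5.0, 10.0)
r_b = gauge(0.0, 10.0)
print(round(compensated_pressure(r_f, r_b), 3))  # -> 5.0
```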
  5. Mixed Reality provides a powerful medium for transparent and effective human-robot communication, especially for robots with significant physical limitations (e.g., those without arms). To enhance nonverbal capabilities for armless robots, this article presents two studies that explore two different categories of mixed reality deictic gestures for armless robots: a virtual arrow positioned over a target referent (a non-ego-sensitive allocentric gesture) and a virtual arm positioned over the gesturing robot (an ego-sensitive allocentric gesture). In Study 1, we explore the tradeoffs between these two types of gestures with respect to both objective performance and subjective social perceptions. Our results show fundamentally different task-oriented versus social benefits, with non-ego-sensitive allocentric gestures enabling faster reaction time and higher accuracy, but ego-sensitive gestures enabling higher perceived social presence, anthropomorphism, and likability. In Study 2, we refine our design recommendations by showing that in fact these different gestures should not be viewed as mutually exclusive alternatives, and that by using them together, robots can achieve both task-oriented and social benefits. 