This content will become publicly available on July 31, 2026

Title: Human-Robot Communication: Utilizing Light-Based Signals to Convey Robot Operating State
The field of human-robot interaction has been expanding rapidly, but an ever-present obstacle is the development of accessible, reliable, and effective forms of communication. It is often imperative to the efficacy of the robot, and of the overall human-robot interaction, that the robot be capable of expressing information about itself to humans in the environment. Among the evolving approaches to this obstacle is the use of light as a communication modality. Light-based communication effectively captures attention, can be seen at a distance, and is common in our daily lives. Our team explored the ways light-based signals on robots are being used to improve human understanding of robot operating state. In other words, we sought to determine how light-based signals help individuals identify the conditions (e.g., capabilities, goals, needs) that comprise and dictate a robot’s current functionality. We identified four operating states (“Blocked”, “Error”, “Seeking Interaction”, and “Not Seeking Interaction”) in which light is utilized to increase individuals’ understanding of the robot’s operations. These operating states are expressed through manipulation of three visual dimensions of a robot’s onboard lighting: color, pattern of lighting, and frequency of the pattern. In our work, we outline how these dimensions vary across operating states, describe the effect they have on human understanding, and provide potential explanations for the importance of each dimension. We also discuss the main shortcomings of this technology: the first is the overlapping use of combinations of dimensions across operating states; the remainder relate to the difficulties of leveraging color to convey information.
Finally, we provide considerations on how this technology might be improved going forward through the standardization of light-based signals and by increasing the amount of information provided within interactions between agents.
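To make the three visual dimensions concrete, the mapping the abstract describes can be sketched as a lookup from operating state to a (color, pattern, frequency) signal. This is a minimal illustration only; the specific color, pattern, and frequency values below are our assumptions, not findings reported in the paper.

```python
# Illustrative sketch: the four operating states expressed through three
# visual dimensions of onboard lighting (color, pattern, frequency).
# The concrete values are hypothetical, chosen only to show the structure.
from dataclasses import dataclass

@dataclass(frozen=True)
class LightSignal:
    color: str           # hue of the onboard LEDs
    pattern: str         # e.g., "solid", "blink", "pulse"
    frequency_hz: float  # how fast the pattern repeats (0 = static)

SIGNALS = {
    "Blocked":                 LightSignal("yellow", "blink", 2.0),
    "Error":                   LightSignal("red",    "blink", 4.0),
    "Seeking Interaction":     LightSignal("green",  "pulse", 1.0),
    "Not Seeking Interaction": LightSignal("blue",   "solid", 0.0),
}

def signal_for(state: str) -> LightSignal:
    """Return the light signal a robot would display in a given state."""
    return SIGNALS[state]
```

A table like this also makes the paper's first shortcoming visible: two states that share a pattern (here, "Blocked" and "Error" both blink) can only be distinguished by color or frequency, which is exactly the overlap the authors identify.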
Award ID(s):
2431636
PAR ID:
10652685
Publisher / Repository:
AHFE Open Access
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. For robots to seamlessly interact with humans, we first need to make sure that humans and robots understand one another. Diverse algorithms have been developed to enable robots to learn from humans (i.e., transferring information from humans to robots). In parallel, visual, haptic, and auditory communication interfaces have been designed to convey the robot’s internal state to the human (i.e., transferring information from robots to humans). Prior research often separates these two directions of information transfer, and focuses primarily on either learning algorithms or communication interfaces. By contrast, in this survey we take an interdisciplinary approach to identify common themes and emerging trends that close the loop between learning and communication. Specifically, we survey state-of-the-art methods and outcomes for communicating a robot’s learning back to the human teacher during human-robot interaction. This discussion connects human-in-the-loop learning methods and explainable robot learning with multimodal feedback systems and measures of human-robot interaction. We find that—when learning and communication are developed together—the resulting closed-loop system can lead to improved human teaching, increased human trust, and human-robot co-adaptation. The paper includes a perspective on several of the interdisciplinary research themes and open questions that could advance how future robots communicate their learning to everyday operators. Finally, we implement a selection of the reviewed methods in a case study where participants kinesthetically teach a robot arm. This case study documents and tests an integrated approach for learning in ways that can be communicated, conveying this learning across multimodal interfaces, and measuring the resulting changes in human and robot behavior. 
  2. Human–robot collaboration is becoming increasingly common in factories around the world; accordingly, we need to improve the interaction experiences between humans and robots working in these spaces. In this article, we report on a user study that investigated methods for informing a person about a robot’s intent to move when working together in a shared workspace, via signals provided by the robot. In this case, the workspace was the surface of a tabletop. Our study tested the effectiveness of three motion-based and three light-based intent signals as well as the overall level of comfort participants felt while working with the robot to sort colored blocks on the tabletop. Although the result was not statistically significant, our findings suggest that the light signal located closest to the workspace (an LED bracelet near the robot’s end effector) was the most noticeable and least confusing to participants. These findings can be leveraged to support human–robot collaborations in shared spaces.
  3. Work in Human–Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human–robot group interactions. Yet the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this article, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of interaction-shaping robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human–robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR. 
  4. Recent advances in construction automation have increased the need for cooperation between workers and robots, where workers face both success and failure in human-robot collaborative work, ultimately affecting their trust in robots. This study simulated a worker-robot bricklaying collaborative task to examine the impacts of blame targets (responsibility attributions) on trust and trust transfer in multi-robot-human interaction. The findings showed that workers’ responsibility attributions to themselves or to the robots significantly affect their trust in the robot. Further, in multi-robot-human interaction, observing one robot’s failure to complete a task affected trust in the other robots, a phenomenon known as trust transfer.
  5.
    This paper proposes and evaluates the use of image classification for detailed, full-body human-robot tactile interaction. A camera positioned below a translucent robot skin captures shadows generated from human touch and infers social gestures from the captured images. This approach enables rich tactile interaction with robots without the sensor arrays used in traditional social robot tactile skins. It also supports touch interaction with non-rigid robots, achieves high-resolution sensing for robots with surfaces of different sizes and shapes, and removes the requirement of direct contact with the robot. We demonstrate the idea with an inflatable robot and a stand-alone testing device, an algorithm for recognizing touch gestures from shadows that uses Densely Connected Convolutional Networks, and an algorithm for tracking the positions of touch and hovering shadows. Our experiments show that the system can distinguish between six touch gestures under three lighting conditions with 87.5–96.0% accuracy, depending on the lighting, and can accurately track touch positions as well as infer motion activities in realistic interaction conditions. Additional applications for this method include interactive screens on inflatable robots and privacy-maintaining robots for the home.
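The shadow-tracking idea in the last abstract can be sketched in a few lines: with the camera below the translucent skin, a touch appears as a patch of dark pixels, and its position can be estimated as the centroid of that patch. This is our own minimal illustration under assumed names and a simple threshold, not the authors' implementation (which also classifies gestures with a DenseNet).

```python
# Minimal sketch (illustrative, not the authors' code): track the position
# of a touch shadow as the centroid of dark pixels in a camera frame
# captured beneath the translucent robot skin.
import numpy as np

def shadow_centroid(frame: np.ndarray, threshold: float = 0.3):
    """Return the (row, col) centroid of pixels darker than `threshold`,
    or None if no shadow is present. `frame` holds intensities in [0, 1]."""
    mask = frame < threshold
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

# A synthetic 8x8 frame with a dark 2x2 "touch" near one corner.
frame = np.ones((8, 8))
frame[1:3, 5:7] = 0.0
center = shadow_centroid(frame)  # centroid of the dark patch
```

A hovering hand casts a softer, lighter shadow than direct contact, which is why the paper can distinguish touch from hover without any pressure sensing; in this sketch that distinction would correspond to choosing different intensity thresholds.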