

Title: A Social Robot for Anxiety Reduction via Deep Breathing
In this paper, we introduce Ommie, a novel robot that supports deep breathing practices for the purposes of anxiety reduction. The robot’s primary function is to guide users through a series of extended inhales, exhales, and holds by way of haptic interactions and audio cues. We present core design decisions during development, such as robot morphology and tactility, as well as the results of a usability study in collaboration with a local wellness center. Interacting with Ommie resulted in a significant reduction in STAI-6 anxiety measures, and participants found the robot intuitive, approachable, and engaging. Participants also reported feelings of focus and companionship when using the robot, often elicited by the haptic interaction. These results show promise in the robot’s capacity for supporting mental health.
Award ID(s):
1955653 1928448 2106690 1813651
NSF-PAR ID:
10354175
Author(s) / Creator(s):
Date Published:
Journal Name:
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. Physical interaction between humans and robots can help robots learn to perform complex tasks. The robot arm gains information by observing how the human kinesthetically guides it throughout the task. While prior works focus on how the robot learns, it is equally important that this learning is transparent to the human teacher. Visual displays that show the robot’s uncertainty can potentially communicate this information; however, we hypothesize that visual feedback mechanisms miss out on the physical connection between the human and robot. In this work we present a soft haptic display that wraps around and conforms to the surface of a robot arm, adding a haptic signal at an existing point of contact without significantly affecting the interaction. We demonstrate how soft actuation creates a salient haptic signal while still allowing flexibility in device mounting. Using a psychophysics experiment, we show that users can accurately distinguish inflation levels of the wrapped display with an average Weber fraction of 11.4%. When we place the wrapped display around the arm of a robotic manipulator, users are able to interpret and leverage the haptic signal in sample robot learning tasks, improving identification of areas where the robot needs more training and enabling the user to provide better demonstrations. See videos of our device and user studies here: https://youtu.be/tX-2Tqeb9Nw 
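The Weber fraction reported above (11.4%) is a standard psychophysics quantity: the just-noticeable difference (JND) in stimulus intensity divided by the baseline intensity. A minimal sketch of the computation, using illustrative inflation values that are assumptions for the example rather than figures from the paper:

```python
def weber_fraction(jnd: float, baseline: float) -> float:
    """Just-noticeable difference (JND) divided by baseline stimulus intensity."""
    return jnd / baseline

# Hypothetical example: if users can just distinguish a 1.14-unit change in
# inflation level against a 10-unit baseline, the Weber fraction is 0.114,
# i.e. the 11.4% average reported in the study above.
wf = weber_fraction(1.14, 10.0)
```

A smaller Weber fraction indicates finer discrimination: users can reliably detect proportionally smaller changes in the display's inflation level.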
  2. This article examines how people respond to robot-administered verbal and physical punishments. Human participants were tasked with sorting colored chips under time pressure and were punished by a robot when they made mistakes, such as inaccurate sorting or sorting too slowly. Participants were either punished verbally by being told to stop sorting for a fixed time, or physically, by restraining their ability to sort with an in-house-crafted robotic exoskeleton. Either a human experimenter or the robot exoskeleton administered punishments, with participant task performance and subjective perceptions of their interaction with the robot recorded. The results indicate that participants made more mistakes on the task when under the threat of robot-administered punishment. Participants also tended to comply with robot-administered punishments at a lower rate than human-administered punishments, which suggests that humans may not afford a robot the social authority to administer punishments. This study also contributes to our understanding of compliance with a robot and whether people accept a robot’s authority to punish. The results may influence the design of robots placed in authoritative roles and promote discussion of the ethical ramifications of robot-administered punishment.
  3.
    In this experiment, we investigated how a robot’s violation of several social norms influences human engagement with and perception of that robot. Each participant in our study (n = 80) played 30 rounds of rock-paper-scissors with a robot. In the three experimental conditions, the robot violated a social norm by cheating, cursing, or insulting the participant during gameplay. In the control condition, the robot performed a non-norm-violating behavior by stretching its hand. During the game, we found that participants had strong emotional reactions to all three social norm violations. However, participants spoke more words to the robot only after it cheated. After the game, participants were more likely to describe the robot as an agent only if they were in the cheating condition. These results imply that while social norm violations do elicit strong immediate reactions, only cheating elicits a significantly stronger prolonged perception of agency.
  4. Human–robot collaboration is becoming increasingly common in factories around the world; accordingly, we need to improve the interaction experiences between humans and robots working in these spaces. In this article, we report on a user study that investigated methods for providing information to a person about a robot’s intent to move when working together in a shared workspace through signals provided by the robot. In this case, the workspace was the surface of a tabletop. Our study tested the effectiveness of three motion-based and three light-based intent signals as well as the overall level of comfort participants felt while working with the robot to sort colored blocks on the tabletop. Although not significant, our findings suggest that the light signal located closest to the workspace—an LED bracelet located closest to the robot’s end effector—was the most noticeable and least confusing to participants. These findings can be leveraged to support human–robot collaborations in shared spaces. 
  5. Current commercially available robotic minimally invasive surgery (RMIS) platforms provide no haptic feedback of tool interactions with the surgical environment. As a consequence, novice robotic surgeons must rely exclusively on visual feedback to sense their physical interactions with the surgical environment. This technical limitation can make it challenging and time-consuming to train novice surgeons to proficiency in RMIS. Extensive prior research has demonstrated that incorporating haptic feedback is effective at improving surgical training task performance. However, few studies have investigated the utility of providing multiple modalities of haptic feedback simultaneously (multi-modality haptic feedback) in this context, and these studies have presented mixed results regarding its efficacy. Furthermore, the inability to generalize and compare these mixed results has limited our ability to understand why they can vary significantly between studies. Therefore, we have developed a generalized, modular multi-modality haptic feedback and data acquisition framework leveraging the real-time data acquisition and streaming capabilities of the Robot Operating System (ROS). In our preliminary study using this system, participants complete a peg transfer task using a da Vinci robot while receiving haptic feedback of applied forces, contact accelerations, or both via custom wrist-worn haptic devices. Results highlight the capability of our system in running systematic comparisons between various single- and dual-modality haptic feedback approaches.