Title: A Social Robot for Anxiety Reduction via Deep Breathing
In this paper, we introduce Ommie, a novel robot that supports deep breathing practices for the purposes of anxiety reduction. The robot’s primary function is to guide users through a series of extended inhales, exhales, and holds by way of haptic interactions and audio cues. We present core design decisions during development, such as robot morphology and tactility, as well as the results of a usability study in collaboration with a local wellness center. Interacting with Ommie resulted in a significant reduction in STAI-6 anxiety measures, and participants found the robot intuitive, approachable, and engaging. Participants also reported feelings of focus and companionship when using the robot, often elicited by the haptic interaction. These results show promise in the robot’s capacity for supporting mental health.
Award ID(s):
1955653 1928448 2106690 1813651
NSF-PAR ID:
10354175
Journal Name:
2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN)
Sponsoring Org:
National Science Foundation
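The STAI-6 reduction reported in the abstract can be reproduced from raw item ratings. The sketch below assumes the standard six-item short form (items rated 1–4, positive-mood items "calm", "relaxed", and "content" reverse-scored, and the item sum rescaled to the usual 20–80 STAI range); the ratings shown are hypothetical, not the study's data.

```python
# STAI-6 scoring sketch (standard short form; illustrative ratings, not study data).
# Each item is rated 1-4; positive-mood items are reverse-scored (5 - rating),
# and the item sum is rescaled by 20/6 onto the familiar 20-80 STAI range.
POSITIVE_ITEMS = {"calm", "relaxed", "content"}  # reverse-scored items

def stai6_score(ratings):
    total = sum((5 - v) if item in POSITIVE_ITEMS else v
                for item, v in ratings.items())
    return total * 20 / 6

# Hypothetical pre- and post-session ratings for one participant
pre  = {"calm": 2, "tense": 3, "upset": 2, "relaxed": 2, "content": 2, "worried": 3}
post = {"calm": 3, "tense": 2, "upset": 1, "relaxed": 3, "content": 3, "worried": 2}
reduction = stai6_score(pre) - stai6_score(post)  # positive => anxiety decreased
```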
More Like this
  1. In the realm of robotics and automation, robot teleoperation, which facilitates human–machine interaction in distant or hazardous settings, has surged in significance. A persistent issue in this domain is the delay between command issuance and action execution, which harms operator situational awareness and performance and increases cognitive load. These delays, particularly in long-distance operations, are difficult to mitigate even with the most advanced computing. Current solutions mainly revolve around machine-based adjustments to combat these delays; however, a notable gap remains in harnessing human perception for an enhanced subjective teleoperation experience. This paper introduces a novel approach of sensory manipulation for induced human adaptation in delayed teleoperation. Drawing from motor learning and rehabilitation principles, it is posited that strategic sensory manipulation, via altered sensory stimuli, can mitigate the subjective feeling of these delays. The focus is not on introducing new skills or adapting to novel conditions; rather, the approach leverages prior motor-coordination experience in the context of delays. The objective is to reduce the need for extensive training or sophisticated automation designs. A human-centered experiment involving 41 participants examined the effects of modified haptic cues in teleoperation with delays. These cues were generated from high-fidelity physics engines using parameters from robot-end sensors or physics-engine simulations. The results underscored several benefits, notably a considerable reduction in task time and enhanced user perceptions of visual delays. Real-time haptic feedback, or the anchoring method, emerged as a significant contributor to these benefits, showing reduced cognitive load, bolstered self-confidence, and minimized frustration.

     Beyond the prevalent methods of automation design and training, this research underscores induced human adaptation as a pivotal avenue in robot teleoperation. It seeks to enhance teleoperation efficacy through rapid human adaptation, offering insights beyond merely optimizing robotic systems for delay compensation.
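The anchoring idea (rendering undelayed haptic forces from a physics model while the visual channel remains delayed) can be sketched in a toy 1-D loop. Everything below (the spring model, gains, and delay length) is illustrative, not the paper's actual setup.

```python
from collections import deque

def simulate(steps=50, delay=5, k=2.0, dt=0.01):
    """Toy 1-D teleoperation loop: haptic force is computed from the
    *current* simulated state (undelayed, as in the anchoring method),
    while the operator's visual feedback lags by `delay` ticks."""
    x = 0.0                                     # tool position, driven toward a target
    target = 1.0
    visual = deque([x] * delay, maxlen=delay)   # fixed-length delayed visual buffer
    log = []
    for _ in range(steps):
        force = k * (target - x)        # spring force from the physics model (undelayed)
        x += force * dt                 # advance the simulated tool
        delayed_view = visual[0]        # operator sees the state `delay` ticks ago
        visual.append(x)                # oldest entry falls out automatically
        log.append((x, delayed_view, force))
    return log
```

Running it shows the visual channel trailing the true state by exactly `delay` steps while the haptic force always reflects the present state.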

     
  2. Physical interaction between humans and robots can help robots learn to perform complex tasks. The robot arm gains information by observing how the human kinesthetically guides it throughout the task. While prior works focus on how the robot learns, it is equally important that this learning is transparent to the human teacher. Visual displays that show the robot’s uncertainty can potentially communicate this information; however, we hypothesize that visual feedback mechanisms miss out on the physical connection between the human and robot. In this work we present a soft haptic display that wraps around and conforms to the surface of a robot arm, adding a haptic signal at an existing point of contact without significantly affecting the interaction. We demonstrate how soft actuation creates a salient haptic signal while still allowing flexibility in device mounting. Using a psychophysics experiment, we show that users can accurately distinguish inflation levels of the wrapped display with an average Weber fraction of 11.4%. When we place the wrapped display around the arm of a robotic manipulator, users are able to interpret and leverage the haptic signal in sample robot learning tasks, improving identification of areas where the robot needs more training and enabling the user to provide better demonstrations. See videos of our device and user studies here: https://youtu.be/tX-2Tqeb9Nw 
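The Weber fraction reported above is the ratio of the just-noticeable difference (JND) to the reference stimulus intensity, often averaged over several reference levels. A minimal sketch, using hypothetical reference pressures and JNDs rather than the study's data:

```python
def weber_fraction(reference, jnd):
    # Weber fraction = just-noticeable difference / reference intensity
    return jnd / reference

# Hypothetical inflation-level discrimination data (not the study's measurements)
refs = [10.0, 20.0, 40.0]   # reference pressures, e.g. kPa
jnds = [1.2, 2.2, 4.4]      # measured JNDs at each reference
fracs = [weber_fraction(r, j) for r, j in zip(refs, jnds)]
avg = sum(fracs) / len(fracs)   # average Weber fraction across references
```

A roughly constant fraction across references is what Weber's law predicts, which is why a single percentage summarizes discriminability.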
  3. The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine what forms of explanations best foster human trust in the robot. We found that comprehensive and real-time visualizations of the robot’s internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, forms of explanation that are best suited to foster trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines. 
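The parsing step in the pipeline above checks demonstrated action sequences against an induced task grammar. As a rough illustration only, here is a minimal textbook Earley recognizer over a hypothetical toy bottle-opening grammar; it is not the paper's improved joint haptic–grammar parser, and it assumes a grammar without empty productions.

```python
def earley_recognize(grammar, start, tokens):
    """Textbook Earley recognizer. grammar maps each nonterminal to a list
    of productions (tuples of symbols); a state is (lhs, rhs, dot, origin)."""
    chart = [set() for _ in range(len(tokens) + 1)]
    for rhs in grammar[start]:
        chart[0].add((start, rhs, 0, 0))
    for i in range(len(tokens) + 1):
        added = True
        while added:                      # iterate to a fixed point per chart column
            added = False
            for lhs, rhs, dot, origin in list(chart[i]):
                if dot < len(rhs):
                    sym = rhs[dot]
                    if sym in grammar:    # predict: expand a nonterminal
                        for prod in grammar[sym]:
                            if (sym, prod, 0, i) not in chart[i]:
                                chart[i].add((sym, prod, 0, i)); added = True
                    elif i < len(tokens) and tokens[i] == sym:  # scan a terminal
                        chart[i + 1].add((lhs, rhs, dot + 1, origin))
                else:                     # complete: advance waiting parents
                    for l2, r2, d2, o2 in list(chart[origin]):
                        if d2 < len(r2) and r2[d2] == lhs:
                            if (l2, r2, d2 + 1, o2) not in chart[i]:
                                chart[i].add((l2, r2, d2 + 1, o2)); added = True
    return any(lhs == start and dot == len(rhs) and origin == 0
               for lhs, rhs, dot, origin in chart[len(tokens)])

# Hypothetical toy grammar for a multistep bottle-opening task
grammar = {"OPEN": [("grasp", "twist", "pull"),
                    ("grasp", "push", "twist", "pull")]}
ok = earley_recognize(grammar, "OPEN", ["grasp", "twist", "pull"])
```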
  4. The goal of this article is to enable robots to perform robust task execution following human instructions in partially observable environments. A robot’s ability to interpret and execute commands is fundamentally tied to its semantic world knowledge. Commonly, robots use exteroceptive sensors, such as cameras or LiDAR, to detect entities in the workspace and infer their visual properties and spatial relationships. However, semantic world properties are often visually imperceptible. We posit the use of non-exteroceptive modalities including physical proprioception, factual descriptions, and domain knowledge as mechanisms for inferring semantic properties of objects. We introduce a probabilistic model that fuses linguistic knowledge with visual and haptic observations into a cumulative belief over latent world attributes to infer the meaning of instructions and execute the instructed tasks in a manner robust to erroneous, noisy, or contradictory evidence. In addition, we provide a method that allows the robot to communicate knowledge dissonance back to the human as a means of correcting errors in the operator’s world model. Finally, we propose an efficient framework that anticipates possible linguistic interactions and infers the associated groundings for the current world state, thereby bootstrapping both language understanding and generation. We present experiments on manipulators for tasks that require inference over partially observed semantic properties, and evaluate our framework’s ability to exploit expressed information and knowledge bases to facilitate convergence, and generate statements to correct declared facts that were observed to be inconsistent with the robot’s estimate of object properties. 
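The cumulative belief over latent world attributes described above can be illustrated as a discrete Bayes filter that multiplies in one likelihood per observation, whatever its modality. The latent property, its values, and the likelihoods below are all hypothetical.

```python
def bayes_update(belief, likelihood):
    """One discrete Bayes filter step: multiply a belief over latent values
    by an observation likelihood, then renormalize."""
    post = {v: belief[v] * likelihood[v] for v in belief}
    z = sum(post.values())
    return {v: p / z for v, p in post.items()}

# Hypothetical latent property: is the referenced bottle "full" or "empty"?
belief = {"full": 0.5, "empty": 0.5}                          # uninformed prior
belief = bayes_update(belief, {"full": 0.8, "empty": 0.2})    # linguistic evidence
belief = bayes_update(belief, {"full": 0.9, "empty": 0.3})    # haptic evidence (high mass)
```

Because each modality enters as a likelihood, contradictory or noisy evidence shifts the posterior gradually rather than overwriting it, which is the robustness property the abstract emphasizes.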
  5. Current commercially available robotic minimally invasive surgery (RMIS) platforms provide no haptic feedback of tool interactions with the surgical environment. As a consequence, novice robotic surgeons must rely exclusively on visual feedback to sense their physical interactions with the surgical environment. This technical limitation can make it challenging and time-consuming to train novice surgeons to proficiency in RMIS. Extensive prior research has demonstrated that incorporating haptic feedback is effective at improving surgical training task performance. However, few studies have investigated the utility of providing multiple modalities of haptic feedback simultaneously (multi-modality haptic feedback) in this context, and these studies have presented mixed results regarding its efficacy. Furthermore, the inability to generalize and compare these mixed results has limited our ability to understand why they vary significantly between studies. Therefore, we have developed a generalized, modular multi-modality haptic feedback and data acquisition framework leveraging the real-time data acquisition and streaming capabilities of the Robot Operating System (ROS). In a preliminary study using this system, participants completed a peg-transfer task on a da Vinci robot while receiving haptic feedback of applied forces, contact accelerations, or both via custom wrist-worn haptic devices. Results highlight our system's capability to run systematic comparisons between various single- and dual-modality haptic feedback approaches.
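One simple way to sketch the dual-modality feedback described above is to map sensed tool-tip force and contact acceleration to normalized drive levels for two separate wrist-worn actuators, one per modality. The gains and clipping below are hypothetical placeholders, not the authors' calibration.

```python
def haptic_commands(force_n, accel_ms2, force_gain=0.2, accel_gain=0.05):
    """Map sensed force (N) and contact acceleration (m/s^2) to normalized
    [0, 1] drive levels for two wrist-worn actuators, one per modality.
    Gains are hypothetical; each channel saturates rather than overdriving."""
    clamp = lambda x: max(0.0, min(1.0, x))
    return clamp(force_gain * force_n), clamp(accel_gain * accel_ms2)
```

Keeping the two channels independent is what lets single-modality conditions (force only, acceleration only) be compared against the dual-modality condition within one framework.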