

Title: JESSIE: Synthesizing Social Robot Behaviors for Personalized Neurorehabilitation and Beyond
JESSIE is a robotic system that enables novice programmers to program social robots by expressing high-level specifications. We employ control synthesis with a tangible front-end to allow users to define complex behavior for which we automatically generate control code. We demonstrate JESSIE in the context of enabling clinicians to create personalized treatments for people with mild cognitive impairment (MCI) on a Kuri robot, in little time and without error. We evaluated JESSIE with neuropsychologists, who reported high usability and learnability. They gave suggestions for improvement, including increased support for personalization, multi-party programming, collaborative goal setting, and re-tasking the robot's role post-deployment, each of which raises technical and sociotechnical issues in HRI. We exhibit JESSIE's reproducibility by replicating a clinician-created program on a TurtleBot 2. As an open-source means of accessing control synthesis, JESSIE supports reproducibility, scalability, and accessibility of personalized robots for HRI.
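To make the control-synthesis idea concrete, here is a minimal sketch under our own assumptions; this is not JESSIE's actual synthesis toolchain, and the rule format and function names are hypothetical. High-level "when event, do behavior" specifications are checked for conflicts and compiled into an executable controller.

```python
# Illustrative sketch only: a toy "compiler" that turns high-level rules
# authored by a novice user into an executable lookup controller.
# All names are hypothetical; JESSIE itself uses formal control synthesis.

def compile_controller(rules):
    """Map each observed event to the robot behavior a rule requests."""
    table = {}
    for event, behavior in rules:
        if event in table and table[event] != behavior:
            # Refuse conflicting specifications rather than guess.
            raise ValueError(f"conflicting rules for event '{event}'")
        table[event] = behavior
    return table

def run_step(controller, event, default="idle_animation"):
    """Execute one sense-act step; unknown events fall back to a safe default."""
    return controller.get(event, default)

# A clinician-style specification: "when X happens, do Y".
spec = [
    ("session_start", "greet_user"),
    ("task_complete", "praise_user"),
    ("user_inactive", "offer_prompt"),
]

controller = compile_controller(spec)
print(run_step(controller, "task_complete"))   # -> praise_user
print(run_step(controller, "unknown_event"))   # -> idle_animation
```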
Award ID(s):
1915734 1935500
NSF-PAR ID:
10173289
Author(s) / Creator(s):
Date Published:
Journal Name:
Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
Page Range / eLocation ID:
121 to 130
Format(s):
Medium: X
Sponsoring Org:
National Science Foundation
More Like this
  1. The proliferation of Large Language Models (LLMs) presents both a critical design challenge and a remarkable opportunity for the field of Human-Robot Interaction (HRI). While the direct deployment of LLMs on interactive robots may be unsuitable for reasons of ethics, safety, and control, LLMs might nevertheless provide a promising baseline technique for many elements of HRI. Specifically, in this position paper, we argue for the use of LLMs as Scarecrows: ‘brainless,’ straw-man black-box modules integrated into robot architectures for the purpose of quickly enabling full-pipeline solutions, much like the use of “Wizard of Oz” (WoZ) and other human-in-the-loop approaches. We explicitly acknowledge that these Scarecrows, rather than providing a satisfying or scientifically complete solution, incorporate a form of the wisdom of the crowd, and, in at least some cases, will ultimately need to be replaced or supplemented by a robust and theoretically motivated solution. We provide examples of how Scarecrows could be used in language-capable robot architectures as useful placeholders, and suggest initial reporting guidelines for authors, mirroring existing guidelines for the use and reporting of WoZ techniques. 
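As a concrete illustration of the Scarecrow pattern, here is a minimal sketch under our own assumptions: the class and method names are hypothetical, and the LLM call is stubbed so the example stays self-contained. The point is architectural: the placeholder sits behind a stable interface so it can later be replaced by a principled module.

```python
# Hedged sketch of a "Scarecrow": an LLM-backed placeholder behind a fixed
# interface, enabling a full-pipeline robot architecture before a robust
# module exists. Names are hypothetical, not from the paper.
from abc import ABC, abstractmethod

class DialogueModule(ABC):
    @abstractmethod
    def respond(self, utterance: str) -> str: ...

class ScarecrowDialogue(DialogueModule):
    """Brainless placeholder: delegates to an LLM (stubbed here) and logs
    that its output is unvalidated, mirroring WoZ-style reporting."""
    def respond(self, utterance: str) -> str:
        reply = self._call_llm(utterance)
        print("[scarecrow] unvalidated LLM output in use")  # reporting hook
        return reply

    def _call_llm(self, prompt: str) -> str:
        # A real system would call an external LLM API here; stubbed so the
        # sketch runs without network access.
        return f"(canned reply to: {prompt})"

# The rest of the architecture depends only on the DialogueModule interface,
# so the Scarecrow can later be swapped for a theoretically motivated module.
robot_dialogue: DialogueModule = ScarecrowDialogue()
print(robot_dialogue.respond("Where is the charging dock?"))
```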
  2. Many robot-delivered health interventions aim to support people longitudinally at home to complement or replace in-clinic treatments. However, there is little guidance on how robots can support collaborative goal setting (CGS). CGS is the process in which a person works with a clinician to set and modify their goals for care; it can improve treatment adherence and efficacy. However, for home-deployed robots, clinicians will have limited availability to help set and modify goals over time, which necessitates that robots support CGS on their own. In this work, we explore how robots can facilitate CGS in the context of our robot CARMEN (Cognitively Assistive Robot for Motivation and Neurorehabilitation), which delivers neurorehabilitation to people with mild cognitive impairment (PwMCI). We co-designed robot behaviors for supporting CGS with clinical neuropsychologists and PwMCI, and prototyped them on CARMEN. We present feedback on how PwMCI envision these behaviors supporting goal progress and motivation during an intervention. We report insights on how to support this process with home-deployed robots and propose a framework to support HRI researchers interested in exploring this both in the context of cognitively assistive robots and beyond. This work supports designing and implementing CGS on robots, which will ultimately extend the efficacy of robot-delivered health interventions.
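One way to picture robot-supported CGS, as a hedged sketch rather than CARMEN's actual implementation: a goal record the robot can track and propose adjustments to between clinician visits, with the user keeping the final say. The field names and the adjustment rule below are illustrative assumptions.

```python
# Hedged sketch: a goal representation that lets a home-deployed robot
# propose (not impose) modifications, preserving the collaborative spirit
# of CGS. All names and thresholds are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class CareGoal:
    description: str
    target_per_week: int      # e.g., number of completed exercises
    completed_this_week: int = 0

    def log_completion(self) -> None:
        self.completed_this_week += 1

    def weekly_review(self) -> str:
        """Propose a modification the user can accept or decline."""
        if self.completed_this_week >= self.target_per_week:
            return (f"Nice work! Raise '{self.description}' to "
                    f"{self.target_per_week + 1}/week?")
        return (f"Would you like to lower '{self.description}' to "
                f"{max(1, self.target_per_week - 1)}/week?")

goal = CareGoal("memory notebook entries", target_per_week=5)
goal.log_completion()
print(goal.weekly_review())
```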
  3. Effective interactions between humans and robots are vital to achieving shared tasks in collaborative processes. Robots can utilize diverse communication channels to interact with humans, such as hearing, speech, sight, touch, and learning. Our focus, amidst the various means of interaction between humans and robots, is on three emerging frontiers that significantly impact the future directions of human–robot interaction (HRI): (i) human–robot collaboration inspired by human–human collaboration, (ii) brain-computer interfaces, and (iii) emotionally intelligent perception. First, we explore advanced techniques for human–robot collaboration, covering a range of methods from compliance and performance-based approaches to synergistic and learning-based strategies, including learning from demonstration, active learning, and learning from complex tasks. Then, we examine innovative uses of brain-computer interfaces for enhancing HRI, with a focus on applications in rehabilitation, communication, and brain state and emotion recognition. Finally, we investigate emotional intelligence in robotics, focusing on translating human emotions to robots via facial expressions, body gestures, and eye-tracking for fluid, natural interactions. Recent developments in these emerging frontiers and their impact on HRI are detailed and discussed. We highlight contemporary trends and emerging advancements in the field. Ultimately, this paper underscores the necessity of a multimodal approach in developing systems capable of adaptive behavior and effective interaction between humans and robots, thus offering a thorough understanding of the diverse modalities essential for maximizing the potential of HRI.
  4. Pedestrian regulation can prevent crowd accidents and improve crowd safety in densely populated areas. Recent studies use mobile robots to regulate pedestrian flows for desired collective motion through the effect of passive human-robot interaction (HRI). This paper formulates a robot motion planning problem for the optimization of two merging pedestrian flows moving through a bottleneck exit. To address the challenge of feature representation of complex human motion dynamics under the effect of HRI, we propose using a deep neural network to model the mapping from the image input of pedestrian environments to the output of robot motion decisions. The robot motion planner is trained end-to-end using a deep reinforcement learning algorithm, which avoids hand-crafted feature detection and extraction, thus improving the learning capability for complex dynamic problems. Our proposed approach is validated in simulated experiments and its performance is evaluated. The results demonstrate that the robot is able to find optimal motion decisions that maximize the pedestrian outflow in different flow conditions, and that the accumulated pedestrian outflow increases significantly compared to cases without robot regulation and with random robot motion.
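For readers who want to see the shape of such an end-to-end planner, here is a minimal sketch assuming PyTorch; the architecture, the 64x64 occupancy-image input, and the five-action output are our assumptions, not the paper's exact network.

```python
# Hedged sketch: a small CNN mapping an occupancy-style image of the
# pedestrian scene to a discrete robot motion decision, in the spirit of
# the end-to-end planner above. Sizes and action set are assumptions.
import torch
import torch.nn as nn

class MotionPolicy(nn.Module):
    def __init__(self, n_actions: int = 5):  # e.g., stop/forward/back/left/right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.head = nn.Linear(32 * 13 * 13, n_actions)  # for 64x64 input

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        # Returns action scores; a DRL algorithm (e.g., DQN-style training)
        # would fit these end-to-end from pedestrian-outflow reward.
        return self.head(self.features(obs))

policy = MotionPolicy()
frame = torch.zeros(1, 1, 64, 64)            # one grayscale scene image
action = policy(frame).argmax(dim=1).item()  # greedy action for this state
print(action)
```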
  5. Mobile robots are increasingly populating homes, hospitals, shopping malls, factory floors, and other human environments. Human society has social norms that people mutually accept; obeying these norms is an essential signal that someone is participating socially with respect to the rest of the population. For robots to be socially compatible with humans, it is crucial that they obey these social norms. In prior work, we demonstrated a Socially-Aware Navigation (SAN) planner, based on Pareto Concavity Elimination Transformation (PaCcET), in a hallway scenario, optimizing two objectives so the robot does not invade the personal space of people. This article extends our PaCcET-based SAN planner to multiple scenarios with more than two objectives. We modified the Robot Operating System (ROS) navigation stack to include PaCcET in the local planning task. We show that our approach can accommodate multiple Human-Robot Interaction (HRI) scenarios. Using the proposed approach, we achieved successful HRI in multiple scenarios such as hallway interactions, an art gallery, waiting in a queue, and interacting with a group. We implemented our method on a simulated PR2 robot in a 2D simulator (Stage) and a Pioneer 3-DX mobile robot in the real world to validate all the scenarios. A comprehensive set of experiments shows that our approach can handle multiple interaction scenarios on both holonomic and non-holonomic robots; hence, it can be a viable option for a Unified Socially-Aware Navigation (USAN).
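To sketch where multiple social objectives enter a local planner, the toy example below scores candidate poses on path progress and personal-space intrusion. Note the simplification: the paper's actual contribution, PaCcET, replaces the naive weighted sum used here with a Pareto-based transformation; this placeholder only shows the multi-objective selection step.

```python
# Hedged sketch: multi-objective scoring of candidate poses in a local
# planner. The weighted sum below is a deliberate simplification standing
# in for PaCcET; all weights and distances are illustrative assumptions.
import math

def score(candidate, goal, person, w_path=1.0, w_social=2.0):
    """Lower is better: distance-to-goal plus a personal-space penalty."""
    d_goal = math.dist(candidate, goal)
    d_person = math.dist(candidate, person)
    social_penalty = max(0.0, 1.2 - d_person)  # penalize entering ~1.2 m zone
    return w_path * d_goal + w_social * social_penalty

goal, person = (5.0, 0.0), (2.0, 0.2)
candidates = [(1.0, 0.0), (1.0, 0.8), (1.0, -0.8)]  # next poses from the planner
best = min(candidates, key=lambda c: score(c, goal, person))
print(best)  # selects the detour that respects personal space
```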