Title: Learning and Comfort in Human–Robot Interaction: A Review
Collaborative robots offer promising solutions for human–robot cooperative tasks. In this paper, we present a comprehensive review of two significant topics in human–robot interaction: robot learning from demonstrations and human comfort. Learning from demonstrations has substantially improved the quality of human–robot collaboration. We survey human teaching and robot learning approaches together with their corresponding applications, and we discuss several important issues that must be addressed in the human–robot teaching–learning process. We then describe the factors that may affect human comfort in human–robot interaction and present the measures used to improve human acceptance of robots and human comfort.
Award ID(s):
1845779
NSF-PAR ID:
10175670
Date Published:
Journal Name:
Applied Sciences
Volume:
9
Issue:
23
ISSN:
2076-3417
Page Range / eLocation ID:
5152
Sponsoring Org:
National Science Foundation
More Like this
  1. Turn-taking is a fundamental behavior during human interactions, and robots must be capable of turn-taking to interact with humans. Current state-of-the-art approaches to turn-taking focus on developing general models to predict the end of turn (EoT) across all contexts. This demands an all-inclusive verbal and non-verbal behavioral dataset covering every possible interaction context; gathering such a dataset before robot deployment may be infeasible or impractical. More importantly, a robot needs to predict the EoT and decide on the best time to take a turn (i.e., start speaking). In this research, we present a learning from demonstration (LfD) system that lets a robot learn, after it has been deployed, when to take a turn within specific social interaction contexts. The system captures demonstrations of turn-taking during social interactions and uses them to train an LSTM-based recurrent neural network model that replicates the turn-taking behavior of the demonstrator. We evaluate the system by teaching the turn-taking behavior of an interviewer in a job-interview context, and we investigate the efficacy of verbal, prosodic, and gestural cues for deciding when to begin a turn.
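The decision loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names (`pause_ms`, `pitch_falling`, `gaze_at_robot`) and the hand-written scoring rule are hypothetical stand-ins for the trained LSTM model.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Frame:
    """One time step of observed cues (hypothetical features)."""
    pause_ms: float      # verbal cue: silence since the speaker's last word
    pitch_falling: bool  # prosodic cue: falling pitch contour
    gaze_at_robot: bool  # gestural cue: speaker looks at the listener

def eot_score(window: List[Frame]) -> float:
    """Stand-in for the trained LSTM: maps a window of cue frames to an
    end-of-turn score in [0, 1]. A real system would run the learned model
    over the whole window rather than hand-weight the latest frame."""
    latest = window[-1]
    score = min(latest.pause_ms / 1000.0, 1.0) * 0.5
    score += 0.25 if latest.pitch_falling else 0.0
    score += 0.25 if latest.gaze_at_robot else 0.0
    return score

def should_take_turn(window: List[Frame], threshold: float = 0.7) -> bool:
    """Decide whether this is an appropriate moment to start speaking."""
    return eot_score(window) >= threshold

# A long pause with falling pitch and mutual gaze signals end of turn.
print(should_take_turn([Frame(900, True, True)]))   # True
print(should_take_turn([Frame(100, False, False)])) # False
```

The key design point from the abstract is that the decision model is trained per interaction context from demonstrations, rather than being a single general-purpose EoT predictor.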
  2. Abstract

    Wearable robots, also called exoskeletons, have been engineered for human-centered assistance for decades. They provide assistive technologies for maintaining and improving patients' natural capabilities towards self-independence and also enable new therapy solutions for rehabilitation towards pervasive health. Upper limb exoskeletons can significantly enhance human manipulation of the environment, which is crucial to patients' independence, self-esteem, and quality of life. For long-term use in both in-hospital and at-home settings, there is still a need for new technologies with high comfort, biocompatibility, and operability. Recent progress in soft robotics has initiated soft exoskeletons (also called exosuits), which are based on controllable and compliant materials and structures. Remarkable literature reviews have been performed for rigid exoskeletons, ranging from robot design to practical applications; because the field is still emerging, however, few have focused on soft upper limb exoskeletons. This paper provides a systematic review of recent progress in wearable upper limb robotics, including both rigid and soft exoskeletons, with a focus on their designs and applications in various pervasive healthcare settings. The technical needs for wearable robots are carefully reviewed, and the assistance and rehabilitation that can be enhanced by wearable robotics are particularly discussed. The knowledge from rigid wearable robots may provide practical experience and inspire new ideas for soft exoskeleton designs. We also discuss the challenges and opportunities of wearable assistive robotics for pervasive health.
  3. Teachable agents are pedagogical agents that employ the 'learning-by-teaching' strategy, which facilitates learning by encouraging students to construct explanations, reflect on misconceptions, and elaborate on what they know. Teachable agents present unique opportunities to maximize the benefits of a 'learning-by-teaching' experience. For example, teachable agents can provide socio-emotional support to learners, influencing learner self-efficacy and motivation, and increasing learning. Prior work has found that a teachable agent which engages learners socially through social dialogue and paraverbal adaptation on pitch can have positive effects on rapport and learning. In this work, we introduce Emma, a teachable robotic agent that can speak socially and adapt on both pitch and loudness. Based on the phenomenon of entrainment, multi-feature adaptation on pitch and loudness has been found in human-human interactions to be highly correlated with learning and social engagement. In a study with 48 middle school participants, we performed a novel exploration of how multi-feature adaptation can influence learner rapport and learning, both as an independent social behavior and combined with social dialogue. We found significantly more rapport for Emma when the robot both adapted and spoke socially than when Emma only adapted, and indications of a similar trend for learning. Additionally, it appears that an individual's initial comfort level with robots may influence how they respond to such behavior, suggesting that for individuals who are more comfortable interacting with robots, social behavior may have a more positive influence.
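Entrainment-based paraverbal adaptation, as described above, means converging the robot's speech features toward the interlocutor's. A minimal sketch of one convergence step is shown below; the adaptation rate and the simple linear rule are assumptions for illustration, not the study's actual policy.

```python
def entrain(robot_value: float, human_value: float, rate: float = 0.3) -> float:
    """Move one paraverbal feature (pitch or loudness) a fraction of the
    way toward the interlocutor's measured value. The linear rule and the
    rate of 0.3 are illustrative assumptions."""
    return robot_value + rate * (human_value - robot_value)

# Multi-feature adaptation: update both pitch and loudness each utterance.
robot_pitch, robot_loudness = 200.0, 60.0   # Hz, dB (illustrative values)
human_pitch, human_loudness = 180.0, 66.0

robot_pitch = entrain(robot_pitch, human_pitch)         # moves toward 180 Hz
robot_loudness = entrain(robot_loudness, human_loudness)  # moves toward 66 dB
print(round(robot_pitch, 1), round(robot_loudness, 1))
```

Repeating this update each utterance yields gradual convergence, mirroring the entrainment observed in human-human interaction.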
  4. This paper presents a novel architecture for a Unified Planner for Socially-Aware Navigation (UP-SAN) and explains its need in Socially Assistive Robotics (SAR) applications. Our approach emphasizes interpersonal distance and shows how spatial communication can be used to build a unified planner for a human-robot collaborative environment. Socially-aware navigation (SAN) is vital to making humans feel comfortable and safe around robots; HRI studies have shown that the importance of SAN transcends safety and comfort. SAN plays a crucial role in the perceived intelligence, sociability, and social capacity of a robot, thereby increasing the acceptance of robots in public places. Human environments are highly dynamic and pose serious social challenges to robots intended for human interaction. For robots to cope with the changing dynamics of a situation, they must infer intent and detect changes in the interaction context. SAN has gained immense interest in the social robotics community; to the best of our knowledge, however, no existing planner can adapt to different interaction contexts spontaneously after autonomously sensing that context, and most recent efforts involve social path planning for a single context. In this work, we propose a novel approach to a Unified Planner for SAN that can plan and execute human-friendly trajectories for an autonomously sensed interaction context. Our approach augments the navigation stack of the Robot Operating System (ROS) with machine learning and optimization tools: a machine learning-based context classifier and a PaCcET-based local planner. We discuss our preliminary results and our concrete plans for putting the pieces together to achieve UP-SAN.
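The pipeline described above, a sensed context feeding a context-specific planner configuration, can be sketched as a simple lookup. The context names and proxemic clearance values below are hypothetical; in the actual system the context comes from a learned classifier and the trajectory from a PaCcET-based local planner, not a table.

```python
# Illustrative proxemic clearance (meters) per interaction context.
# These contexts and values are assumptions for the sketch, not the
# paper's learned parameters.
CONTEXT_CLEARANCE = {
    "hallway_passing": 0.5,
    "joining_a_group": 1.2,
    "waiting_in_queue": 0.8,
}

def select_clearance(context: str, default: float = 1.0) -> float:
    """Map an autonomously sensed interaction context to a planner
    parameter, standing in for the classifier + planner pipeline.
    Unknown contexts fall back to a conservative default."""
    return CONTEXT_CLEARANCE.get(context, default)

print(select_clearance("hallway_passing"))  # 0.5
print(select_clearance("unseen_context"))   # 1.0 (fallback)
```

The point of a *unified* planner is that this context-to-behavior mapping is selected at runtime from sensed context, rather than being fixed for a single deployment scenario.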