
Title: Learning and Comfort in Human–Robot Interaction: A Review
Collaborative robots offer promising solutions for human–robot cooperative tasks. In this paper, we present a comprehensive review of two significant topics in human–robot interaction: robot learning from demonstrations and human comfort. Robot learning from demonstrations has substantially improved the quality of human–robot collaboration, and this review surveys human teaching and robot learning approaches together with their corresponding applications. We also discuss several important issues that must be addressed in the human–robot teaching–learning process. We then describe the factors that may affect human comfort in human–robot interaction and discuss the measures used to improve human acceptance of robots and human comfort during interaction.
Authors:
Award ID(s):
1845779
Publication Date:
NSF-PAR ID:
10175670
Journal Name:
Applied Sciences
Volume:
9
Issue:
23
Page Range or eLocation-ID:
5152
ISSN:
2076-3417
Sponsoring Org:
National Science Foundation
More Like this
  1. Teachable agents are pedagogical agents that employ the ‘learning-by-teaching’ strategy, which facilitates learning by encouraging students to construct explanations, reflect on misconceptions, and elaborate on what they know. Teachable agents present unique opportunities to maximize the benefits of a ‘learning-by-teaching’ experience. For example, teachable agents can provide socio-emotional support to learners, influencing learner self-efficacy and motivation, and increasing learning. Prior work has found that a teachable agent which engages learners socially through social dialogue and paraverbal adaptation on pitch can have positive effects on rapport and learning. In this work, we introduce Emma, a teachable robotic agent that can speak socially and adapt on both pitch and loudness. Based on the phenomenon of entrainment, multi-feature adaptation on tone and loudness has been found in human-human interactions to be highly correlated with learning and social engagement. In a study with 48 middle school participants, we performed a novel exploration of how multi-feature adaptation can influence learner rapport and learning, both as an independent social behavior and combined with social dialogue. We found significantly more rapport for Emma when the robot both adapted and spoke socially than when Emma only adapted, and indications of a similar trend for learning. Additionally, it appears that an individual’s initial comfort level with robots may influence how they respond to such behavior, suggesting that for individuals who are more comfortable interacting with robots, social behavior may have a more positive influence. (A minimal sketch of this kind of prosodic adaptation follows this list.)
  2. This paper presents a novel architecture for a Unified Planner for Socially-aware Navigation (UP-SAN) and explains its need in Socially Assistive Robotics (SAR) applications. Our approach emphasizes interpersonal distance and how spatial communication can be used to build a unified planner for a human-robot collaborative environment. Socially-Aware Navigation (SAN) is vital to making humans feel comfortable and safe around robots; HRI studies have shown that the importance of SAN transcends safety and comfort. SAN plays a crucial role in the perceived intelligence, sociability, and social capacity of the robot, thereby increasing the acceptance of robots in public places. Human environments are highly dynamic and pose serious social challenges to robots intended for human interaction. For robots to cope with the changing dynamics of a situation, they must infer intent and detect changes in the interaction context. SAN has gained immense interest in the social robotics community; to the best of our knowledge, however, there is no planner that can adapt to different interaction contexts spontaneously after autonomously sensing that context. Most recent efforts involve social path planning for a single context. In this work, we propose a novel approach for a Unified Planner for SAN that can plan and execute human-friendly trajectories for an autonomously sensed interaction context. Our approach augments the Robot Operating System (ROS) navigation stack with machine learning and optimization tools: we modified the navigation stack using a machine learning-based context classifier and a PaCcET-based local planner to achieve the goals of UP-SAN. We discuss our preliminary results and concrete plans for putting the pieces together to achieve UP-SAN. (A minimal sketch of this context-to-planner switching idea follows this list.)
  3. Nonverbal interactions are a key component of human communication. As robots increasingly operate in close proximity to human beings, it is important that they follow the social rules governing the use of space. Prior research has conceptualized personal space as physical zones based on static distances. This work examined how preferred interaction distance can change across different interaction scenarios. We conducted a user study using three different robot heights, and also examined the difference in preferred interaction distance when a robot approaches a human and, conversely, when a human approaches a robot. Factors included in the quantitative analysis were participant gender, robot height, and method of approach; subjective measures included human comfort and perceived safety. The results show that robot height, participant gender, and method of approach were significant factors influencing the measured proxemic zones and, accordingly, participant comfort. Subjective data showed that respondents regarded robots more favorably after participating in the study. Furthermore, the NAO was perceived most positively by respondents across various metrics, and the PR2 Tall most negatively.
  4. Learning companion robots can provide personalized learning interactions to engage students in many domains including STEM. For successful interactions, students must feel comfortable and engaged. We describe an experiment with a learning companion robot acting as a teachable robot; based on human-to-human peer tutoring, students teach the robot how to solve math problems. We compare student attitudes of comfort, attention, engagement, motivation, and physical proximity for two dyadic stance formations: a face-to-face stance and a side-by-side stance. In human-robot interaction experiments, it is common for dyads to assume a face-to-face stance, while in human-to-human peer tutoring, it is common for dyads to sit in side-by-side as well as face-to-face formations. We find that students in the face-to-face stance report stronger feelings of comfort and attention, compared to students in the side-by-side stance. We find no difference between stances for feelings of engagement, motivation, and physical proximity.
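
The paraverbal adaptation described in the Emma study (item 1) can be pictured as the robot nudging its speech pitch and loudness toward the values measured from the learner's most recent utterance. The Python sketch below is a minimal, hypothetical illustration of that single-step entrainment update; the ProsodyProfile class, the adaptation rate, and the numbers are assumptions for illustration, not the study's actual implementation.

```python
# Hypothetical sketch of single-step paraverbal entrainment: the robot moves
# its text-to-speech pitch and loudness a fraction of the way toward the
# learner's measured values. Names and values are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class ProsodyProfile:
    pitch_hz: float      # fundamental frequency of speech
    loudness_db: float   # loudness level


def entrain(robot: ProsodyProfile, learner: ProsodyProfile,
            rate: float = 0.3) -> ProsodyProfile:
    """Move the robot's prosody a fraction of the way toward the learner's.

    rate = 0 keeps the robot's baseline voice; rate = 1 copies the learner.
    """
    return ProsodyProfile(
        pitch_hz=robot.pitch_hz + rate * (learner.pitch_hz - robot.pitch_hz),
        loudness_db=robot.loudness_db + rate * (learner.loudness_db - robot.loudness_db),
    )


if __name__ == "__main__":
    robot_voice = ProsodyProfile(pitch_hz=180.0, loudness_db=60.0)
    learner_voice = ProsodyProfile(pitch_hz=220.0, loudness_db=66.0)
    print(entrain(robot_voice, learner_voice))
    # ProsodyProfile(pitch_hz=192.0, loudness_db=61.8)
```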
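
The UP-SAN architecture (item 2) hinges on autonomously classifying the interaction context and then switching the local planner's behavior to match. The following sketch shows that context-to-parameter switching in its simplest form; the context labels, feature names, and parameter values are illustrative assumptions standing in for the learned classifier and PaCcET-based local planner described in the abstract.

```python
# Hypothetical sketch of "sense the interaction context, then switch the
# local planner's parameters." All labels, features, and numbers are
# placeholders, not the UP-SAN implementation.

from typing import Dict

# Per-context local-planner parameters (e.g., extra clearance around people).
CONTEXT_PARAMS: Dict[str, Dict[str, float]] = {
    "hallway_passing": {"personal_space_m": 0.8, "max_speed": 0.6},
    "joining_a_group": {"personal_space_m": 1.2, "max_speed": 0.4},
    "open_space":      {"personal_space_m": 1.5, "max_speed": 0.3},
}


def classify_context(features: Dict[str, float]) -> str:
    """Stand-in for a learned classifier: map scene features to a context label."""
    if features.get("num_people", 0) >= 3 and features.get("people_speed", 0.0) < 0.2:
        return "joining_a_group"
    if features.get("corridor_width_m", 10.0) < 2.5:
        return "hallway_passing"
    return "open_space"


def plan_step(features: Dict[str, float]) -> Dict[str, float]:
    """Pick local-planner parameters for the autonomously sensed context."""
    context = classify_context(features)
    params = CONTEXT_PARAMS[context]
    print(f"context={context} -> {params}")
    return params


if __name__ == "__main__":
    plan_step({"num_people": 4, "people_speed": 0.1, "corridor_width_m": 5.0})
    plan_step({"num_people": 1, "people_speed": 1.0, "corridor_width_m": 2.0})
```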