
Title: Perceived Social Intelligence as Evaluation of Socially Navigation
As Human-Robot Interaction becomes more sophisticated, measuring the performance of a social robot is crucial to gauging the effectiveness of its behavior. However, social behavior does not necessarily have the strict performance metrics that other autonomous behaviors can have. Indeed, when considering robot navigation, a socially-appropriate action may be one that is sub-optimal, resulting in longer paths or longer times to reach a goal. Instead, we can rely on subjective assessments of the robot's social performance by a participant in a robot interaction or by a bystander. In this paper, we use the newly validated Perceived Social Intelligence (PSI) scale to examine the perception of non-humanoid robots in non-verbal social scenarios. We show that there are significant differences in perceived social intelligence between robots exhibiting Socially-Aware Navigation (SAN) behavior and those using a traditional navigation planner in scenarios such as waiting in a queue and interacting with a group.
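As a concrete illustration of this kind of subjective comparison, the sketch below scores a hypothetical PSI subscale for two conditions (a SAN planner vs. a traditional planner) and compares them with a non-parametric test. The items, ratings, and choice of test are assumptions for illustration only and do not reproduce the paper's data or analysis.

```python
# Hypothetical sketch: comparing PSI ratings between a SAN-planner condition
# and a traditional-planner condition. All numbers are invented.
import numpy as np
from scipy import stats

# Each row is one participant's 1-7 Likert ratings on the items of one
# (hypothetical) PSI subscale, e.g. "appears to adapt to people's behavior".
san_ratings = np.array([
    [6, 5, 6, 7],
    [5, 6, 6, 6],
    [7, 6, 5, 6],
    [6, 6, 7, 5],
])
traditional_ratings = np.array([
    [3, 4, 3, 4],
    [4, 3, 4, 4],
    [3, 3, 4, 3],
    [4, 4, 3, 3],
])

# Average each participant's item ratings into a single subscale score.
san_scores = san_ratings.mean(axis=1)
trad_scores = traditional_ratings.mean(axis=1)

# Non-parametric comparison of the two conditions (ordinal Likert data).
u_stat, p_value = stats.mannwhitneyu(san_scores, trad_scores,
                                     alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")
```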
Authors:
Award ID(s):
1719027 1757929
Publication Date:
NSF-PAR ID:
10291868
Journal Name:
HRI '21 Companion: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction
Page Range or eLocation-ID:
519 to 523
Sponsoring Org:
National Science Foundation
More Like this
  1. Robotic social intelligence is increasingly important. However, measures of human social intelligence omit basic skills, and robot-specific scales do not focus on social intelligence. We combined human-robot interaction concepts of beliefs, desires, and intentions with psychology concepts of behaviors, cognitions, and emotions to create 20 Perceived Social Intelligence (PSI) Scales to comprehensively measure perceptions of robots with a wide range of embodiments and behaviors. Participants rated humanoid and non-humanoid robots interacting with people in five videos. Each scale had one factor and high internal consistency, indicating each measures a coherent construct. Scales capturing perceived social information processing skills (appearing to recognize, adapt to, and predict behaviors, cognitions, and emotions) and scales capturing perceived skills for identifying people (appearing to identify humans, individuals, and groups) correlated strongly with social competence and constituted the Mind and Behavior factors. Social presentation scales (appearing friendly, caring, helpful, trustworthy, and not rude, conceited, or hostile) relate more to Social Response to Robots Scales and Godspeed Indices, form a separate factor, and predict positive feelings about robots and wanting social interaction with them. For a comprehensive measure, researchers can use all 20 PSI scales for free. Alternatively, they can select the most relevant scales for their projects. (A toy sketch of a scale-reliability computation appears after this list.)
  2. This paper presents a novel architecture to attain a Unified Planner for Socially-aware Navigation (UP-SAN) and explains its need in Socially Assistive Robotics (SAR) applications. Our approach emphasizes interpersonal distance and how spatial communication can be used to build a unified planner for a human-robot collaborative environment. Socially-Aware Navigation (SAN) is vital to making humans feel comfortable and safe around robots; HRI studies have shown that the importance of SAN transcends safety and comfort. SAN plays a crucial role in the perceived intelligence, sociability, and social capacity of the robot, thereby increasing the acceptance of robots in public places. Human environments are very dynamic and pose serious social challenges to robots intended for human interaction. For robots to cope with the changing dynamics of a situation, there is a need to infer intent and detect changes in the interaction context. SAN has gained immense interest in the social robotics community; to the best of our knowledge, however, there is no planner that can adapt to different interaction contexts spontaneously after autonomously sensing that context. Most of the recent efforts involve social path planning for a single context. In this work, we propose a novel approach for a Unified Planner for SAN that can plan and execute trajectories that are human-friendly for an autonomously sensed interaction context. Our approach augments the navigation stack of the Robot Operating System (ROS) utilizing machine learning and optimization tools. We modified the ROS navigation stack using a machine learning-based context classifier and a PaCcET-based local planner to achieve the goals of UP-SAN. We discuss our preliminary results and concrete plans for putting the pieces together in achieving UP-SAN. (A toy sketch of context-driven planner selection appears after this list.)
  3. Mobile robots are increasingly populating homes, hospitals, shopping malls, factory floors, and other human environments. Human society has social norms that people mutually accept; obeying these norms is an essential signal that someone is participating socially with respect to the rest of the population. For robots to be socially compatible with humans, it is crucial for robots to obey these social norms. In prior work, we demonstrated a Socially-Aware Navigation (SAN) planner, based on Pareto Concavity Elimination Transformation (PaCcET), in a hallway scenario, optimizing two objectives so the robot does not invade the personal space of people. This article extends our PaCcET-based SAN planner to multiple scenarios with more than two objectives. We modified the Robot Operating System's (ROS) navigation stack to include PaCcET in the local planning task. We show that our approach can accommodate multiple Human-Robot Interaction (HRI) scenarios. Using the proposed approach, we achieved successful HRI in multiple scenarios such as hallway interactions, an art gallery, waiting in a queue, and interacting with a group. We implemented our method on a simulated PR2 robot in a 2D simulator (Stage) and a Pioneer 3-DX mobile robot in the real world to validate all the scenarios. A comprehensive set of experiments shows that our approach can handle multiple interaction scenarios on both holonomic and non-holonomic robots; hence, it can be a viable option for a Unified Socially-Aware Navigation (USAN). (A toy sketch of multi-objective trajectory scoring appears after this list.)
  4. Research in creative robotics continues to expand across all creative domains, including art, music, and language. Creative robots are primarily designed to be task-specific, with limited research into the implications of their design outside their core task. In the case of a musical robot, this includes when a human sees and interacts with the robot before and after the performance, as well as in between pieces. These non-musical interaction tasks, such as the presence of a robot during musical equipment set-up, play a key role in the human perception of the robot, yet they have received only limited attention. In this paper, we describe a new audio system using emotional musical prosody, designed to match the creative process of a musical robot for use before, between, and after musical performances. Our generation system relies on the creation of a custom dataset for musical prosody. This system is designed foremost to operate in real time and to allow rapid generation and dialogue exchange between human and robot. For this reason, the system combines symbolic deep learning, through a Conditional Convolutional Variational Auto-encoder, with an emotion-tagged audio sampler. We then compare this to a SOTA text-to-speech system on our robotic platform, Shimon, the marimba player. We conducted a between-groups study with 100 participants watching a musician interact with Shimon for 30 s. We were able to increase user ratings for the key creativity metrics, novelty and coherence, while maintaining ratings for expressivity across each implementation. Our results also indicated that by communicating in a form that relates to the robot's core functionality, we can raise likeability and perceived intelligence while not altering animacy or anthropomorphism. These findings indicate the variation that can occur in the perception of a robot based on interactions surrounding a performance, such as initial meetings and the spaces between pieces, in addition to the core creative algorithms. (A toy sketch of emotion-conditioned decoding appears after this list.)
  5. The human-robot interaction community has developed many methods for robots to navigate safely and socially alongside humans. However, experimental procedures to evaluate these works are usually constructed on a per-method basis. Such disparate evaluations make it difficult to compare the performance of such methods across the literature. To bridge this gap, we introduce SocNavBench, a simulation framework for evaluating social navigation algorithms. SocNavBench comprises a simulator with photo-realistic capabilities and curated social navigation scenarios grounded in real-world pedestrian data. We also provide an implementation of a suite of metrics to quantify the performance of navigation algorithms on these scenarios. Altogether, SocNavBench provides a test framework for evaluating disparate social navigation methods in a consistent and interpretable manner. To illustrate its use, we demonstrate testing three existing social navigation methods and a baseline method on SocNavBench, showing how the suite of metrics helps infer their performance trade-offs. Our code is open-source, allowing the addition of new scenarios and metrics by the community to help evolve SocNavBench to reflect advancements in our understanding of social navigation. (A toy sketch of episode-level metrics appears after this list.)
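For item 1 above, "high internal consistency" is typically quantified with a statistic such as Cronbach's alpha. The sketch below computes it for one hypothetical subscale; the item ratings are invented and the PSI instrument's actual items are not reproduced here.

```python
# Illustrative sketch: internal consistency (Cronbach's alpha) for one
# hypothetical PSI-style subscale. All ratings below are made up.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: participants x items matrix of Likert ratings."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five participants rating a four-item subscale (invented numbers).
ratings = np.array([
    [6, 6, 5, 6],
    [4, 5, 4, 5],
    [7, 6, 7, 6],
    [3, 3, 4, 3],
    [5, 5, 6, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```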
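For item 2 above (UP-SAN), the core idea is that a learned context classifier decides how the local planner trades off its objectives. The sketch below is a deliberately simplified stand-in: the features, context labels, classifier, and weights are assumptions for illustration, not the authors' ROS implementation or their PaCcET planner.

```python
# Rough sketch of context-driven planner selection: a classifier maps simple
# scene features to an interaction context, and each context selects a
# different weighting of local-planner objectives. All names and numbers are
# hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training rows: [num_people, mean_person_distance_m, people_in_line]
X = [
    [0, 10.0, 0],   # empty open space
    [1,  1.5, 0],   # passing a single person in a hallway
    [4,  1.0, 4],   # waiting in a queue
    [3,  1.2, 0],   # joining a group
]
contexts = ["open_space", "hallway", "queue", "group"]
clf = DecisionTreeClassifier().fit(X, contexts)

# Per-context weights for a multi-objective local planner
# (path efficiency vs. interpersonal-distance cost).
objective_weights = {
    "open_space": {"path_cost": 1.0, "social_cost": 0.0},
    "hallway":    {"path_cost": 0.7, "social_cost": 0.3},
    "queue":      {"path_cost": 0.3, "social_cost": 0.7},
    "group":      {"path_cost": 0.4, "social_cost": 0.6},
}

observed = [[5, 0.9, 5]]                  # features sensed at run time
context = clf.predict(observed)[0]
print(context, objective_weights[context])
```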
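For item 3 above, a PaCcET-style local planner evaluates candidate motions on several objectives at once. The sketch below shows only a generic non-dominated filtering step over two toy objectives (progress toward the goal and personal-space intrusion); PaCcET's actual objective-space transformation is not reproduced here.

```python
# Toy multi-objective scoring of candidate waypoints for a SAN-style local
# planner. Lower is better for both objectives; dominated candidates are
# discarded. Positions, radii, and objectives are illustrative only.
import math

def objectives(candidate, goal, person, comfort_radius=1.2):
    """Return (distance_to_goal, personal_space_intrusion) for one candidate."""
    d_goal = math.dist(candidate, goal)
    intrusion = max(0.0, comfort_radius - math.dist(candidate, person))
    return (d_goal, intrusion)

def non_dominated(scored):
    """Keep (candidate, objectives) pairs not dominated by any other candidate."""
    keep = []
    for i, (cand, obj) in enumerate(scored):
        dominated = any(
            all(o2 <= o1 for o2, o1 in zip(other, obj)) and other != obj
            for j, (_, other) in enumerate(scored) if j != i
        )
        if not dominated:
            keep.append((cand, obj))
    return keep

goal, person = (5.0, 0.0), (2.5, 0.5)
candidates = [(1.0, 0.0), (1.0, 1.0), (1.0, -1.0), (2.0, 0.3)]
scored = [(c, objectives(c, goal, person)) for c in candidates]
for cand, obj in non_dominated(scored):
    print(cand, obj)
```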
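For item 4 above, the generation system conditions its output on an emotion tag. The sketch below shows the generic conditional-decoding pattern (a one-hot emotion vector concatenated to a latent code before decoding); the dense layers, sizes, and emotion labels are simplifications and assumptions, not the paper's convolutional model or dataset.

```python
# Minimal sketch of emotion-conditioned decoding in the spirit of a
# conditional VAE. Architecture and labels are hypothetical.
import torch
import torch.nn as nn

EMOTIONS = ["happy", "sad", "angry", "calm"]
LATENT_DIM, FRAME_DIM = 16, 64   # FRAME_DIM: flattened symbolic prosody frame

class ConditionalDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + len(EMOTIONS), 128),
            nn.ReLU(),
            nn.Linear(128, FRAME_DIM),
        )

    def forward(self, z, emotion_onehot):
        # Concatenate the latent code with the emotion condition, then decode.
        return self.net(torch.cat([z, emotion_onehot], dim=-1))

decoder = ConditionalDecoder()
z = torch.randn(1, LATENT_DIM)                  # sample from the prior
emotion = torch.zeros(1, len(EMOTIONS))
emotion[0, EMOTIONS.index("happy")] = 1.0       # condition on "happy"
prosody_frame = decoder(z, emotion)             # 1 x FRAME_DIM output
print(prosody_frame.shape)
```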
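For item 5 above, a benchmark like SocNavBench aggregates per-episode metrics for each navigation run. The sketch below computes a few generic ones (path length, time to goal, personal-space violations); the metric names, thresholds, and function signatures are assumptions for illustration, not SocNavBench's actual implementation or API.

```python
# Illustrative per-episode social navigation metrics over recorded
# trajectories. Trajectories and thresholds are invented.
import math

def episode_metrics(robot_traj, pedestrian_trajs, dt=0.1, comfort_radius=0.5):
    """robot_traj: list of (x, y); pedestrian_trajs: time-aligned (x, y) lists."""
    path_length = sum(math.dist(a, b) for a, b in zip(robot_traj, robot_traj[1:]))
    time_to_goal = dt * (len(robot_traj) - 1)
    violations = 0
    for t, robot_pos in enumerate(robot_traj):
        for ped in pedestrian_trajs:
            if math.dist(robot_pos, ped[t]) < comfort_radius:
                violations += 1
                break  # count at most one violation per timestep
    return {"path_length_m": path_length,
            "time_to_goal_s": time_to_goal,
            "personal_space_violations": violations}

robot = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.1), (1.5, 0.1)]
pedestrians = [[(1.2, 0.3), (1.2, 0.2), (1.2, 0.1), (1.2, 0.0)]]
print(episode_metrics(robot, pedestrians))
```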