
This content will become publicly available on October 1, 2022

Title: Theoretical Considerations for Social Learning between a Human Observer and a Robot Model
Robots are entering various domains of human society, potentially opening more opportunities for people to perceive robots as social agents. We expect that having robots in proximity will create unique social learning situations in which humans spontaneously observe and imitate robots' behaviors. At times, this imitation of robot behaviors may spread unsafe or unethical behaviors among humans. For responsible robot design, therefore, we argue that it is essential to understand the physical and psychological triggers of social learning. Grounded in the existing literature on social learning and the uncanny valley, we discuss the human-likeness of robot appearance, and the affective responses associated with that appearance, as likely factors that either facilitate or deter social learning. We propose practical considerations for social learning and robot design.
Journal Name:
Proceedings of the Human Factors and Ergonomics Society Annual Meeting
Sponsoring Org:
National Science Foundation
More Like this
  1. Explainability has emerged as a critical AI research objective, but the breadth of proposed methods and application domains suggests that criteria for explanation vary greatly. In particular, what counts as a good explanation, and what kinds of explanation are computationally feasible, has become trickier in light of opaque "black box" systems such as deep neural networks. Explanation in such cases has drifted from what many philosophers stipulated as having to involve deductive and causal principles to mere "interpretation," which approximates what happened in the target system to varying degrees. However, such post hoc constructed rationalizations are highly problematic for social robots that operate interactively in spaces shared with humans. For in such social contexts, explanations of behavior, and in particular justifications for violations of expected behavior, should make reference to socially accepted principles and norms. In this article, we show how a social robot's actions can face explanatory demands for how it came to act on its decision, what goals, tasks, or purposes its design had those actions pursue, and what norms or social constraints the system recognizes in the course of its action. As a result, we argue that explanations for social robots will need to be accurate representations of the system's operation along causal, purposive, and justificatory lines. These explanations will need to generate appropriate references to principles and norms; explanations based on mere "interpretability" will ultimately fail to connect the robot's behaviors to their appropriate determinants. We then lay out the foundations of a cognitive robotic architecture for HRI, together with particular component algorithms, for generating explanations and engaging in justificatory dialogues with human interactants. Such explanations track the robot's actual decision-making and behavior, which are themselves determined by normative principles the robot can describe and use for justifications.
  2. Robotic social intelligence is increasingly important. However, measures of human social intelligence omit basic skills, and robot-specific scales do not focus on social intelligence. We combined human-robot interaction concepts of beliefs, desires, and intentions with psychology concepts of behaviors, cognitions, and emotions to create 20 Perceived Social Intelligence (PSI) Scales that comprehensively measure perceptions of robots with a wide range of embodiments and behaviors. Participants rated humanoid and non-humanoid robots interacting with people in five videos. Each scale had one factor and high internal consistency, indicating that each measures a coherent construct. Scales capturing perceived social information processing skills (appearing to recognize, adapt to, and predict behaviors, cognitions, and emotions) and scales capturing perceived skills for identifying people (appearing to identify humans, individuals, and groups) correlated strongly with social competence and constituted the Mind and Behavior factors. Social presentation scales (appearing friendly, caring, helpful, trustworthy, and not rude, conceited, or hostile) relate more closely to the Social Response to Robots Scales and Godspeed Indices, form a separate factor, and predict positive feelings about robots and wanting social interaction with them. For a comprehensive measure, researchers can use all 20 PSI scales for free. Alternatively, they can select the scales most relevant to their projects.
  3. This paper presents a novel architecture for a Unified Planner for Socially-aware Navigation (UP-SAN) and explains its need in Socially Assistive Robotics (SAR) applications. Our approach emphasizes interpersonal distance and how spatial communication can be used to build a unified planner for a human-robot collaborative environment. Socially-Aware Navigation (SAN) is vital to making humans feel comfortable and safe around robots; HRI studies have shown that the importance of SAN transcends safety and comfort. SAN plays a crucial role in the perceived intelligence, sociability, and social capacity of the robot, thereby increasing the acceptance of robots in public places. Human environments are very dynamic and pose serious social challenges to robots intended for human interaction. For robots to cope with the changing dynamics of a situation, there is a need to infer intent and detect changes in the interaction context. SAN has gained immense interest in the social robotics community; to the best of our knowledge, however, there is no planner that can adapt to different interaction contexts spontaneously after autonomously sensing that context. Most recent efforts involve social path planning for a single context. In this work, we propose a novel approach for a Unified Planner for SAN that can plan and execute trajectories that are human-friendly for an autonomously sensed interaction context. Our approach augments the navigation stack of the Robot Operating System (ROS) with machine learning and optimization tools. We modified the ROS navigation stack with a machine learning-based context classifier and a PaCcET-based local planner to achieve the goals of UP-SAN. We discuss our preliminary results and concrete plans for putting the pieces together to achieve UP-SAN.
  4. As personal social robots become more prevalent, the need for the designs of these systems to explicitly consider pets becomes more apparent. However, it is not known whether dogs will interact with a social robot. In two experiments, we investigated whether dogs respond to a social robot after the robot calls their names, and whether dogs follow 'sit' commands given by the robot. We conducted a between-subjects study (n = 34) comparing dogs' reactions to a social robot with their reactions to a loudspeaker. Results indicate that dogs gazed at the robot more often after the robot called their names than after the loudspeaker called their names. Dogs also followed the 'sit' commands given by the robot more often than those given by the loudspeaker. The contribution of this study is that it is the first to provide preliminary evidence that 1) dogs show positive behaviors toward social robots and 2) social robots can influence dogs' behaviors. This study enhances the understanding of the nature of social interactions between humans and social robots from an evolutionary approach. Possible explanations for the observed behavior might point toward dogs perceiving robots as agents, the embodiment of the robot creating pressure for socialized responses, or the multimodal (i.e., verbal and visual) cues provided by the robot being more attractive than our control condition.
  5. The inevitable increase in real-world robot applications will, consequently, lead to more opportunities for robots to have observable failures. Although previous work has explored interaction during robot failure and discussed hypothetical danger, little is known about human reactions to actual robot behaviors involving property damage or bodily harm. An additional, largely unexplored complication is the possible influence of social characteristics in robot design. In this work, we sought to explore these issues through an in-person study with a real robot capable of inducing perceived property damage and personal harm. Participants observed a robot packing groceries and had opportunities to react to and assist the robot in multiple failure cases. Prior exposure to damage and threat failures decreased assistance rates from approximately 81% to 60%, with variations due to robot facial expressions and other factors. Qualitative data were then analyzed to identify interaction design needs and opportunities for failing robots.