
Title: Why Should We Gender?: The Effect of Robot Gendering and Occupational Stereotypes on Human Trust and Perceived Competency
The attribution of human-like characteristics onto humanoid robots has become a common practice in Human-Robot Interaction by designers and users alike. Robot gendering, the attribution of gender onto a robotic platform via voice, name, physique, or other features, is a prevalent technique used to increase user acceptance of robots. One important factor relating to acceptance is user trust. As robots continue to be integrated into common societal roles, it will be critical to evaluate user trust in the robot's ability to perform its job. This paper examines the relationship among occupational gender-roles, user trust, and gendered design features of humanoid robots. Results from the study indicate that there was no significant difference in the perception of trust in the robot's competency when considering the gender of the robot. This extends findings from prior efforts suggesting that performance-based factors have a larger influence on user trust than the robot's gender characteristics. In fact, our study suggests that perceived occupational competency is a better predictor of human trust than robot gender or participant gender. As such, designers should consider robot gendering critically in the context of the application. Such precautions would reduce the potential for robotic technologies to perpetuate societal gender stereotypes.
Journal Name:
HRI '20: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction
Page Range or eLocation-ID:
13 to 21
Sponsoring Org:
National Science Foundation
More Like this
  1. Abstract

    Background

    The worldwide population of older adults will soon exceed the capacity of assisted living facilities. Accordingly, we aim to understand whether appropriately designed robots could help older adults stay active at home.

    Methods

    Building on related literature as well as guidance from experts in game design, rehabilitation, and physical and occupational therapy, we developed eight human-robot exercise games for the Baxter Research Robot, six of which involve physical human-robot contact. After extensive iteration, these games were tested in an exploratory user study including 20 younger adult and 20 older adult users.

    Results

    Only socially and physically interactive games fell in the highest ranges for pleasantness, enjoyment, engagement, cognitive challenge, and energy level. Our games successfully spanned three different physical, cognitive, and temporal challenge levels. User trust and confidence in Baxter increased significantly between pre- and post-study assessments. Older adults experienced higher exercise, energy, and engagement levels than younger adults, and women rated the robot more highly than men on several survey questions.

    Conclusions

    The results indicate that social-physical exercise with a robot is more pleasant, enjoyable, engaging, cognitively challenging, and energetic than similar interactions that lack physical touch. In addition to this main finding, researchers working in similar areas can build on our design practices, our open-source resources, and the age-group and gender differences that we found.
  2. As robotic gadgets, and eventually robots, become increasingly common in daily life, it is critical that roboticists design devices that are accepted across cultures. Previous studies have examined cross-cultural differences in robot acceptance based on various design characteristics. Similarly, prior studies have examined cross-cultural perceptions of kawaii (Japanese cuteness). Building on these two prior research strands, this paper reports on our developing approach, with support from a United States National Science Foundation (NSF) International Research Experiences for Undergraduates (IRES) grant, to use a cross-cultural, faculty-student design team to gain a deeper understanding of the role that kawaii (Japanese cuteness) plays in fostering positive human response to, and acceptance of, robotic gadgets across cultures. After explaining the motivation for the work, we outline our approach from both a technical and educational perspective. In doing so, we provide a case study that demonstrates how a cross-cultural design team involving students can simultaneously generate new knowledge and provide research training for future Human Computer Interaction professionals.
  3. Rau, Pei-Luen P (Ed.)
    We report on a cross-cultural collaborative project between students and faculty at DePauw University in the United States and Shibaura Institute of Technology in Japan that used cross-cultural teams to design and evaluate robotic gadgets to gain a deeper understanding of the role that kawaii (Japanese cuteness) plays in fostering positive human response to, and acceptance of, these devices across cultures. Two cross-cultural design teams used Unity and C# to design and implement prototypes of virtual robotic gadgets as well as virtual environments for the robots to interact in. One team designed a virtual train station as well as robotic gadgets to operate in the station. The other team designed a virtual university campus as well as robotic gadgets that operated in that environment. Two versions of each robotic gadget were designed, such that the two versions differed with respect to one kawaii attribute (shape, size, etc.). Using these robots, we conducted a formal study that compared perceptions of kawaii robots between American college students and Japanese college students, as well as across genders. The findings revealed little difference in the perception of kawaii across cultures and genders. Furthermore, the study shows that designing a robot to be more kawaii/cute appears to positively influence human preference for being around the robot. This study will inform our long-term goal of designing robots that are appealing across gender and culture.
  4. Collaborative robots that work alongside humans will experience service breakdowns and make mistakes. These robotic failures can cause a degradation of trust between the robot and the community being served. A loss of trust may impact whether a user continues to rely on the robot for assistance. In order to improve the teaming capabilities between humans and robots, forms of communication that aid in developing and maintaining trust need to be investigated. In our study, we identify four forms of communication which dictate the timing of information given and type of initiation used by a robot. We investigate the effect that these forms of communication have on trust with and without robot mistakes during a cooperative task. Participants played a memory task game with the help of a humanoid robot that was designed to make mistakes after a certain amount of time passed. The results showed that participants' trust in the robot was better preserved when that robot offered advice only upon request as opposed to when the robot took initiative to give advice.
  5. It is critical for designers of language-capable robots to enable some degree of moral competence in those robots. This is especially critical at this point in history due to the current research climate, in which much natural language generation research focuses on language modeling techniques whose general approach may be categorized as “fabrication by imitation” (the titular mechanical “bull”), which is especially unsuitable in robotic contexts. Furthermore, it is critical for robot designers seeking to enable moral competence to consider previously under-explored moral frameworks that place greater emphasis than traditional Western frameworks on care, equality, and social justice, as the current sociopolitical climate has seen a rise of movements such as libertarian capitalism that have undermined those societal goals. In this paper we examine one alternate framework for the design of morally competent robots, Confucian ethics, and explore how designers may use this framework to enable morally sensitive human-robot communication through three distinct perspectives: (1) How should a robot reason? (2) What should a robot say? and (3) How should a robot act?