Working memory is an important component of cognition that influences key cognitive processes, such as language. As such, working memory should play a key role in cognitive models for language-capable robots. The ways in which working memory buffers are organized within a robot's architecture can inform processes such as Referring Expression Generation. Thus, it is important to understand how information and resources within working memory may be organized to lead to human-like robotic language. Previous work on the DIARC cognitive architecture described an entity-level, feature-based working memory framework in which each known entity had its own dedicated working memory buffer. This paper expands on that framework and proposes a new resource management strategy in which sets of entities that belong to the same type share a single working memory buffer. We end the paper with a brief discussion of how this novel strategy compares to the previously implemented entity-level strategy.
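The two buffer-allocation strategies contrasted in the abstract above can be sketched in a few lines of code. This is a minimal illustrative sketch, not DIARC's actual implementation; all class and field names here are hypothetical, and the bounded-buffer behavior (oldest features are forgotten when a buffer is full) is one plausible reading of "resource management".

```python
from collections import deque

class EntityLevelWM:
    """Entity-level strategy: one bounded buffer per known entity."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffers = {}  # entity id -> deque of features

    def store(self, entity_id, feature):
        # Each entity gets its own dedicated buffer on first use.
        buf = self.buffers.setdefault(entity_id, deque(maxlen=self.capacity))
        buf.append(feature)  # when full, the oldest feature is evicted

class TypeLevelWM:
    """Type-level strategy: entities of the same type share one buffer."""
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.buffers = {}  # type name -> deque of (entity id, feature)

    def store(self, entity_id, entity_type, feature):
        # All entities of a type compete for the same bounded buffer,
        # so a new entity's features can evict another entity's features.
        buf = self.buffers.setdefault(entity_type, deque(maxlen=self.capacity))
        buf.append((entity_id, feature))
```

The key difference this sketch exposes: under the entity-level strategy, storing features of one mug never evicts features of another, while under the type-level strategy all mugs contend for a single shared buffer.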
Forget About It: Entity-Level Working Memory Models for Referring Expression Generation in Robot Cognitive Architectures
Working Memory (WM) plays a key role in natural language understanding and generation. To enable a human-like breadth and flexibility of language understanding and generation capabilities, cognitive systems for language-capable robots should feature a human-like WM system in a similarly central role. However, it is still quite unclear how robotic WM should be designed, as a variety of models of human WM have been proposed in cognitive psychology. Moreover, human reliance on WM during language production is sometimes to help the speaker rather than to help hearers. Thus, it is unclear whether different robotic WM systems might harm certain dimensions of interaction for the sake of the robot speaker’s ostensible ease of cognitive processing. In this paper we demonstrate how different models of human WM can be implemented into robot cognitive architectures. Our results suggest that these models can be effective in terms of accuracy, perceived naturalness, and perceived human-likeness.
- Award ID(s):
- 2044865
- PAR ID:
- 10458325
- Date Published:
- Journal Name:
- Annual Meeting of the Cognitive Science Society
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
-
It is critical for designers of language-capable robots to enable some degree of moral competence in those robots. This is especially critical at this point in history due to the current research climate, in which much natural language generation research focuses on language modeling techniques whose general approach may be categorized as “fabrication by imitation” (the titular mechanical “bull”), which is especially unsuitable in robotic contexts. Furthermore, it is critical for robot designers seeking to enable moral competence to consider previously under-explored moral frameworks that place greater emphasis than traditional Western frameworks on care, equality, and social justice, as the current sociopolitical climate has seen a rise of movements such as libertarian capitalism that have undermined those societal goals. In this paper we examine one alternate framework for the design of morally competent robots, Confucian ethics, and explore how designers may use this framework to enable morally sensitive human-robot communication through three distinct perspectives: (1) How should a robot reason? (2) What should a robot say? and (3) How should a robot act?
-
In this work, we present methods for using human-robot dialog to improve language understanding for a mobile robot agent. The agent parses natural language to underlying semantic meanings and uses robotic sensors to create multi-modal models of perceptual concepts like red and heavy. The agent can be used for showing navigation routes, delivering objects to people, and relocating objects from one location to another. We use dialog clarification questions both to understand commands and to generate additional parsing training data. The agent employs opportunistic active learning to select questions about how words relate to objects, improving its understanding of perceptual concepts. We evaluated this agent on Amazon Mechanical Turk. After training on data induced from conversations, the agent reduced the number of dialog questions it asked while receiving higher usability ratings. Additionally, we demonstrated the agent on a robotic platform, where it learned new perceptual concepts on the fly while completing a real-world task.
-
The attribution of human-like characteristics onto humanoid robots has become a common practice in Human-Robot Interaction by designers and users alike. Robot gendering, the attribution of gender onto a robotic platform via voice, name, physique, or other features, is a prevalent technique used to increase aspects of user acceptance of robots. One important factor relating to acceptance is user trust. As robots continue to integrate themselves into common societal roles, it will be critical to evaluate user trust in the robot's ability to perform its job. This paper examines the relationship among occupational gender-roles, user trust, and gendered design features of humanoid robots. Results from the study indicate that there was no significant difference in the perception of trust in the robot's competency when considering the gender of the robot. This expands on findings from prior efforts suggesting that performance-based factors have larger influences on user trust than the robot's gender characteristics. In fact, our study suggests that perceived occupational competency is a better predictor of human trust than robot gender or participant gender. As such, gendering in robot design should be considered critically in the context of the application by designers. Such precautions would reduce the potential for robotic technologies to perpetuate societal gender stereotypes.
-
Incidental human‐robot encounters are becoming more common as robotic technologies proliferate, but there is little scientific understanding of human experience and reactions during these encounters. To contribute towards addressing this gap, this study applies Grounded Theory methodologies to study human reactions in Human‐Robot Encounters with an autonomous quadruped robot. Based upon observation and interviews, we find that participants' reactions to the robot can be explained by their attitudes of familiarity, certainty, and confidence during their encounter and by their understanding of the robot's capabilities and role. Participants differed in how and whether they utilized opportunities to resolve their unfamiliarity, uncertainty, or lack of confidence, shedding light on the dynamics and experiential characteristics of Human‐Robot Encounters. We provide an emerging theory that can be used to unravel the complexity of the field as well as assist hypothesis generation in future research in designing and deploying mobile autonomous service robots.