A wide range of studies in Human-Robot Interaction (HRI) has shown that robots can influence the social behavior of humans. This phenomenon is commonly explained by the Media Equation. Fundamental to this theory is the idea that when faced with technology (like robots), people perceive it as a social agent with thoughts and intentions similar to those of humans. This perception guides the interaction with the technology and its predicted impact. However, HRI studies have also reported examples in which the Media Equation has been violated, that is, when people treat the influence of robots differently from the influence of humans. To address this gap, we propose a model of Robot Social Influence (RoSI) with two contributing factors. The first factor is a robot's violation of a person's expectations, whether the robot exceeds expectations or fails to meet them. The second factor is a person's social belonging with the robot, whether the person belongs to the same group as the robot or a different group. These factors are primary predictors of robots' social influence and commonly mediate the influence of other factors. We review HRI literature and show how RoSI can explain robots' social influence in concrete HRI scenarios.
Empathetic Robot with Transformer-Based Dialogue Agent
Natural Human-Robot Interaction (HRI) attracts considerable interest in enabling robots to understand the user's emotional state. This paper demonstrates a method for introducing an affection model into the robotic system's conversational agent to provide natural and empathetic HRI. We use a large-scale pre-trained language model and fine-tune it on a dialogue dataset with empathetic characteristics. Building on the progress of existing studies, we extend the current method and enable the agent to perform advanced sentiment analysis using the affection model. This dialogue agent allows the robot to provide natural responses along with emotion classification and estimates of arousal and valence levels. We evaluate our model using different metrics, comparing it with recent studies and demonstrating its emotion detection capacity.
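As a rough illustration of the architecture described above (not the paper's actual implementation), the sketch below pairs a pre-trained conversational language model with an off-the-shelf sentiment classifier using the Hugging Face `transformers` library. The model names are illustrative stand-ins; the paper's affection model additionally estimates arousal and valence, which this sketch omits.

```python
# Minimal sketch: generate a dialogue response and classify the emotional
# tone of the user's utterance. Model names are illustrative stand-ins,
# not the models used in the paper.
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

gen_tok = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
gen_model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# Off-the-shelf sentiment classifier standing in for the affection model.
emotion = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def reply(user_utterance: str):
    """Return a generated response plus the detected sentiment of the user turn."""
    inputs = gen_tok(user_utterance + gen_tok.eos_token, return_tensors="pt")
    output_ids = gen_model.generate(
        **inputs, max_new_tokens=40, pad_token_id=gen_tok.eos_token_id
    )
    # Decode only the newly generated tokens (the robot's response).
    response = gen_tok.decode(
        output_ids[0, inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
    return response, emotion(user_utterance)[0]

print(reply("I just lost my favorite book and I feel terrible."))
```

Fine-tuning the generator on an empathetic dialogue corpus, as the abstract describes, would replace the stock pre-trained weights used in this sketch.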
- Award ID(s):
- 1846658
- PAR ID:
- 10316815
- Date Published:
- Journal Name:
- International Conference on Ubiquitous Robots (UR)
- Format(s):
- Medium: X
- Sponsoring Org:
- National Science Foundation
More Like this
Fluency is an important metric in Human-Robot Interaction (HRI) that describes the coordination with which humans and robots collaborate on a task. Fluency is inherently linked to the timing of the task, making temporal constraint networks a promising way to model and measure fluency. We show that the Multi-Agent Daisy Temporal Network (MAD-TN) formulation, which expands on an existing concept of daisy-structured networks, is both an effective model of human-robot collaboration and a natural way to measure a number of existing fluency metrics. The MAD-TN model highlights new metrics that we hypothesize will strongly correlate with human teammates' perception of fluency.
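The MAD-TN formulation itself is not reproduced here. As a rough illustration of the underlying idea, namely deriving fluency metrics from the timing of human and robot actions, the sketch below computes two standard fluency measures (percentage of concurrent activity and human idle time) from interval data; the function names and example timings are hypothetical, not taken from the paper.

```python
# Rough illustration (not the MAD-TN formulation) of computing common
# fluency metrics from the start/end times of human and robot actions.
from typing import Dict, List, Tuple

Interval = Tuple[float, float]  # (start, end) in seconds

def total_overlap(a: List[Interval], b: List[Interval]) -> float:
    """Total time any interval in `a` overlaps any interval in `b`.

    Assumes intervals within each list are pairwise disjoint
    (each agent performs one action at a time).
    """
    return sum(
        max(0.0, min(a_end, b_end) - max(a_start, b_start))
        for a_start, a_end in a
        for b_start, b_end in b
    )

def fluency_metrics(human: List[Interval], robot: List[Interval]) -> Dict[str, float]:
    task_start = min(s for s, _ in human + robot)
    task_end = max(e for _, e in human + robot)
    duration = task_end - task_start
    human_busy = sum(e - s for s, e in human)
    return {
        "concurrent_activity": total_overlap(human, robot) / duration,
        "human_idle_time": (duration - human_busy) / duration,
    }

# Hypothetical timings for a short collaborative handover task.
human_actions = [(0.0, 2.0), (5.0, 8.0)]
robot_actions = [(1.5, 5.5), (7.0, 9.0)]
print(fluency_metrics(human_actions, robot_actions))
```

In a temporal-constraint-network treatment such as MAD-TN, these start and end times would be events related by temporal constraints rather than raw intervals, which is what lets the same metrics be read directly from the network model.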
Effective interactions between humans and robots are vital to achieving shared tasks in collaborative processes. Robots can utilize diverse communication channels to interact with humans, such as hearing, speech, sight, touch, and learning. Our focus, amidst the various means of interaction between humans and robots, is on three emerging frontiers that significantly impact the future directions of human–robot interaction (HRI): (i) human–robot collaboration inspired by human–human collaboration, (ii) brain-computer interfaces, and (iii) emotionally intelligent perception. First, we explore advanced techniques for human–robot collaboration, covering a range of methods from compliance- and performance-based approaches to synergistic and learning-based strategies, including learning from demonstration, active learning, and learning from complex tasks. Then, we examine innovative uses of brain-computer interfaces for enhancing HRI, with a focus on applications in rehabilitation, communication, and brain-state and emotion recognition. Finally, we investigate emotional intelligence in robotics, focusing on translating human emotions to robots via facial expressions, body gestures, and eye-tracking for fluid, natural interactions. Recent developments in these emerging frontiers and their impact on HRI are detailed and discussed. We highlight contemporary trends and emerging advancements in the field. Ultimately, this paper underscores the necessity of a multimodal approach in developing systems capable of adaptive behavior and effective interaction between humans and robots, thus offering a thorough understanding of the diverse modalities essential for maximizing the potential of HRI.
While the ultimate goal of natural-language-based Human-Robot Interaction (HRI) may be free-form, mixed-initiative dialogue, social robots deployed in the near future will likely primarily engage in wakeword-driven interaction, in which users' commands are prefaced by a wakeword such as "Hey, Robot." This style of interaction helps to allay user privacy concerns, as the robot's full speech recognition module need not be employed until the target wakeword is used. Unfortunately, there are a number of concerns in the popular media surrounding this style of interaction, with consumers fearing that it is training users (in particular, children) to be rude towards technology, and by extension, rude towards other humans. In this paper, we present a study that demonstrates how an alternate style of wakeword, i.e., "Excuse me, Robot," may allay this concern by priming users to phrase commands as Indirect Speech Acts.
In this paper, we address the problem of a two-player linear quadratic differential game with incomplete information, a scenario commonly encountered in multi-agent control, human-robot interaction (HRI), and approximation methods to solve general-sum differential games. While solutions to such linear differential games are typically obtained through coupled Riccati equations, the complexity increases when agents have incomplete information, particularly when neither is aware of the other's cost function. To tackle this challenge, we propose a model-based Peer-Aware Cost Estimation (PACE) framework for learning the cost parameters of the other agent. In PACE, each agent treats its peer as a learning agent rather than a stationary optimal agent, models their learning dynamics, and leverages this dynamic to infer the cost function parameters of the other agent. This approach enables agents to infer each other's objective function in real time based solely on their previous state observations and dynamically adapt their control policies. Furthermore, we provide a theoretical guarantee for the convergence of parameter estimation and the stability of system states in PACE. Additionally, using numerical studies, we demonstrate how modeling the learning dynamics of the other agent benefits PACE, compared to approaches that approximate the other agent as having complete information, particularly in terms of stability and convergence speed.
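For context, the complete-information baseline that such incomplete-information approaches relax can be written as follows; the notation here is assumed for illustration and may differ from the paper's formulation.

```latex
% Standard two-player LQ differential game under complete information,
% shown as a reference point; notation assumed, not taken from the paper.
\begin{align*}
  \dot{x} &= A x + B_1 u_1 + B_2 u_2, \\
  J_i &= \int_0^{\infty} \big( x^\top Q_i x + u_i^\top R_i u_i \big)\, dt,
        \qquad i \in \{1, 2\}.
\end{align*}
% With both cost functions known, feedback Nash strategies take the form
% $u_i^{*} = -R_i^{-1} B_i^\top P_i x$, where the $P_i$ solve the coupled
% algebraic Riccati equations (for $j \neq i$):
\begin{align*}
  0 = \big(A - B_j R_j^{-1} B_j^\top P_j\big)^{\top} P_i
    + P_i \big(A - B_j R_j^{-1} B_j^\top P_j\big)
    + Q_i
    - P_i B_i R_i^{-1} B_i^\top P_i .
\end{align*}
```

When agent i does not know the peer's Q_j and R_j, these coupled equations cannot be solved directly; per the abstract, PACE addresses this by estimating the peer's cost parameters online from observed states and adapting the control policy accordingly.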

