

Search for: All records

Award ID contains: 2143109

Note: When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval).


  1. Understanding human perceptions of robot performance is crucial for designing socially intelligent robots that can adapt to human expectations. Current approaches often rely on surveys, which can disrupt ongoing human–robot interactions. As an alternative, we explore predicting people’s perceptions of robot performance using non-verbal behavioral cues and machine learning techniques. We contribute the SEAN TOGETHER Dataset, consisting of observations of an interaction between a person and a mobile robot in Virtual Reality, together with perceptions of robot performance provided by users on a 5-point scale. We then analyze how well humans and supervised learning techniques can predict perceived robot performance based on different observation types (like facial expression and spatial behavior features). Our results suggest that facial expressions alone provide useful information, but in the navigation scenarios that we considered, reasoning about spatial features in context is critical for the prediction task. Also, supervised learning techniques outperformed humans’ predictions in most cases. Further, when predicting robot performance as a binary classification task on unseen users’ data, the F1-Score of machine learning models more than doubled that of predictions on a 5-point scale. This suggests good generalization capabilities, particularly in identifying performance directionality over exact ratings. Based on these findings, we conducted a real-world demonstration where a mobile robot uses a machine learning model to predict how a human who follows it perceives it. Finally, we discuss the implications of our results for implementing these supervised learning models in real-world navigation. Our work paves the way toward automatically enhancing robot behavior based on observations of users and inferences about their perceptions of a robot. 
    Free, publicly-accessible full text available April 18, 2026
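The abstract above reports that collapsing the 5-point performance scale into a binary label more than doubled the models’ F1-Score on unseen users. A minimal sketch of that evaluation idea (my illustration, not the paper’s code; the threshold of 4 for “good performance” and the example ratings are assumptions):

```python
# Sketch: why collapsing a 5-point performance scale to a binary
# "directionality" label can raise F1-Score on unseen users' data.
# The rating threshold (>= 4 counts as "good") is an assumption.

def binarize(ratings, threshold=4):
    """Map 5-point ratings to a binary good/poor performance label."""
    return [1 if r >= threshold else 0 for r in ratings]

def f1_score(y_true, y_pred):
    """Standard F1 for the positive class: 2PR / (P + R)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical ratings from an unseen user and a model's 5-point predictions:
true_ratings = [5, 4, 2, 1, 4, 3]
pred_ratings = [4, 5, 1, 2, 5, 2]

# Exact 5-point agreement is zero here even though the model always gets
# the direction right, so the binary labels match perfectly.
exact_matches = sum(t == p for t, p in zip(true_ratings, pred_ratings))
binary_f1 = f1_score(binarize(true_ratings), binarize(pred_ratings))
print(exact_matches, binary_f1)  # 0 1.0
```

The toy numbers make the directionality point concrete: a model that is consistently off by one rating point scores zero on exact agreement but perfectly on the binarized task.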
  2. Social robots need to be able to interact effectively with small groups. While there is significant interest in human-robot interaction in groups, little focus has been placed on developing autonomous social robot decision-making methods that operate smoothly with small groups of any size (e.g., 2, 3, or 4 interactants). In this work, we propose a Template- and Graph-based Modeling approach for robots interacting in small groups (TGM), enabling them to interact with groups in a way that is group-size agnostic. Critically, we separate the decision about the target of their communication, or "whom to address?", from the decision of "what to communicate?", which allows us to use template-based actions. We further use Graph Neural Networks (GNNs) to efficiently decide on "whom" and "what". We evaluated TGM using imitation learning and compared the structured reasoning achieved through GNNs to unstructured approaches for this two-part decision-making problem. On two different datasets, we show that TGM outperforms the baselines, encouraging future work to invest in collecting larger datasets. 
    Free, publicly-accessible full text available March 4, 2026
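The group-size-agnostic idea behind "whom to address?" can be sketched with shared scoring weights, the same mechanism that lets GNN-style models handle variable group sizes. This is my reconstruction for illustration, not the paper’s architecture; the features, weights, and scoring rule are assumptions:

```python
# Sketch: score every group member with the SAME shared function applied to
# their own features plus an aggregate of everyone else's features, so the
# decision works unchanged for groups of 2, 3, or 4+ interactants.

def mean_features(rows):
    """Element-wise mean over a list of feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def score(own, others, w_own=1.0, w_ctx=-0.5):
    """Shared scoring function: weights are identical for every member."""
    ctx = mean_features(others)
    return sum(w_own * o + w_ctx * c for o, c in zip(own, ctx))

def whom_to_address(group):
    """Pick the member with the highest shared score, for any group size."""
    scores = [
        score(own, [f for j, f in enumerate(group) if j != i])
        for i, own in enumerate(group)
    ]
    return max(range(len(group)), key=lambda i: scores[i])

# Hypothetical per-person features (e.g., gaze toward the robot, speech activity):
dyad = [[0.9, 0.4], [0.3, 0.8]]
triad = [[0.9, 0.2], [0.4, 0.8], [0.1, 0.1]]
print(whom_to_address(dyad), whom_to_address(triad))  # 0 1
```

Because no parameter count depends on the number of interactants, the same function handles the dyad and the triad, which is the property the abstract calls group-size agnostic.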
  3. This work studies the problem of predicting human intent to interact with a robot in a public environment. To facilitate research in this problem domain, we first contribute the People Approaching Robots Database (PAR-D), a new collection of datasets for intent prediction in Human-Robot Interaction. The database includes a subset of the ATC Approach Trajectory dataset [28] with augmented ground truth labels. It also includes two new datasets collected with a robot photographer at two locations on a university campus. Then, we contribute a novel human-annotated baseline for predicting intent. Our results suggest that the robot’s environment and the amount of time that a person is visible impact human performance in this prediction task. We also provide computational baselines for intent prediction in PAR-D by comparing the performance of several machine learning models, including ones that directly model pedestrian interaction intent and others that predict motion trajectories as an intermediary step. From these models, we find that trajectory prediction seems useful for inferring intent to interact with a robot in a public environment. 
    Free, publicly-accessible full text available November 4, 2025
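The abstract above finds that predicting motion trajectories is a useful intermediary step for inferring intent. A minimal sketch of that pipeline shape (an assumption for illustration, not PAR-D’s actual baselines; the constant-velocity model, horizon, and distance threshold are all made up):

```python
# Sketch: trajectory prediction as an intermediary for intent inference.
# A pedestrian's future position is extrapolated linearly from their last
# two observed positions; if the predicted position falls within a threshold
# distance of the robot, we flag intent to interact.
import math

def predict_position(track, horizon=4):
    """Constant-velocity extrapolation from the last two (x, y) observations."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return (x1 + vx * horizon, y1 + vy * horizon)

def intends_to_interact(track, robot_pos, radius=1.0):
    """Intent decision derived from the predicted trajectory endpoint."""
    px, py = predict_position(track)
    rx, ry = robot_pos
    return math.hypot(px - rx, py - ry) <= radius

robot = (0.0, 0.0)
approaching = [(10.0, 0.0), (8.0, 0.0)]   # heading straight at the robot
passing_by = [(10.0, 5.0), (8.0, 5.0)]    # walking past, 5 m to the side
print(intends_to_interact(approaching, robot))  # True
print(intends_to_interact(passing_by, robot))   # False
```

The design point is that intent is read off a predicted future state rather than the raw observations, which is what makes trajectory prediction a natural intermediary representation.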
  4. Robot-moderated group discussions have the potential to facilitate engaging and productive interactions among human participants. Previous work on topic management in conversational agents has predominantly focused on human engagement and topic personalization, with the agent having an active role in the discussion. Also, studies have shown the usefulness of including robots in groups, yet further exploration is still needed for robots to learn when to change the topic while facilitating discussions. Accordingly, our work investigates the suitability of machine-learning models and audiovisual non-verbal features in predicting appropriate topic changes. We utilized interactions between a robot moderator and human participants, which we annotated and used for extracting acoustic and body language-related features. We provide a detailed analysis of the performance of machine learning approaches using sequential and non-sequential data with different sets of features. The results indicate promising performance in classifying inappropriate topic changes, outperforming rule-based approaches. Additionally, acoustic features alone exhibited performance and robustness comparable to the complete set of multimodal features. Our annotated data is publicly available at https://github.com/ghadj/topic-change-robot-discussions-data-2024. 
    Free, publicly-accessible full text available August 26, 2025
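The learned models above are compared against rule-based approaches. A sketch of what such a rule-based baseline might look like (my illustration only; the paper’s actual rules, features, and thresholds differ):

```python
# Sketch of a rule-based topic-change baseline: flag a topic change as
# appropriate when the group has been silent for a while and speech energy
# is low, using per-window acoustic features. Thresholds are illustrative.

def appropriate_topic_change(silence_s, energy, min_silence=3.0, max_energy=0.2):
    """Rule-based decision from two acoustic features of the current window."""
    return silence_s >= min_silence and energy <= max_energy

# Hypothetical feature windows: (silence duration in seconds, mean speech energy)
windows = [(0.4, 0.9), (3.5, 0.1), (5.0, 0.6)]
print([appropriate_topic_change(s, e) for s, e in windows])  # [False, True, False]
```

Fixed thresholds like these cannot adapt to a group’s conversational rhythm, which is one intuition for why the learned classifiers in the abstract outperform rule-based decisions.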
  5. Work in Human–Robot Interaction (HRI) has investigated interactions between one human and one robot as well as human–robot group interactions. Yet the field lacks a clear definition and understanding of the influence a robot can exert on interactions between other group members (e.g., human-to-human). In this article, we define Interaction-Shaping Robotics (ISR), a subfield of HRI that investigates robots that influence the behaviors and attitudes exchanged between two (or more) other agents. We highlight key factors of interaction-shaping robots that include the role of the robot, the robot-shaping outcome, the form of robot influence, the type of robot communication, and the timeline of the robot’s influence. We also describe three distinct structures of human–robot groups to highlight the potential of ISR in different group compositions and discuss targets for a robot’s interaction-shaping behavior. Finally, we propose areas of opportunity and challenges for future research in ISR. 
  6. Deploying robots in-the-wild is critical for studying human-robot interaction, since human behavior varies between lab settings and public settings. Though some robots have been used in-the-wild, many of them are proprietary, expensive, or unavailable. We introduce Shutter, a low-cost, flexible social robot platform for in-the-wild experiments on human-robot interaction. Our demonstration will include a Shutter robot, which consists of a 4-DOF arm with a face screen, and a Kinect sensor. We will demonstrate two different interactions with Shutter: a photo-taking interaction and an embodied explanations interaction. Both interactions have been publicly deployed on the Shutter system. 
  7. A wide range of studies in Human-Robot Interaction (HRI) has shown that robots can influence the social behavior of humans. This phenomenon is commonly explained by the Media Equation. Fundamental to this theory is the idea that when faced with technology (like robots), people perceive it as a social agent with thoughts and intentions similar to those of humans. This perception guides the interaction with the technology and its predicted impact. However, HRI studies have also reported examples in which the Media Equation has been violated, that is, when people treat the influence of robots differently from the influence of humans. To address this gap, we propose a model of Robot Social Influence (RoSI) with two contributing factors. The first factor is a robot’s violation of a person’s expectations, whether the robot exceeds expectations or fails to meet expectations. The second factor is a person’s social belonging with the robot, whether the person belongs to the same group as the robot or a different group. These factors are primary predictors of robots’ social influence and commonly mediate the influence of other factors. We review HRI literature and show how RoSI can explain robots’ social influence in concrete HRI scenarios. 